Similar Documents
20 similar documents found (search time: 31 ms)
1.
New hybridized discontinuous Galerkin (HDG) methods for the interface problem for elliptic equations are proposed. Unknown functions of our schemes are \(u_h\) in elements and \(\hat{u}_h\) on inter-element edges. That is, we formulate our schemes without introducing the flux variable. We assume that subdomains \(\Omega _1\) and \(\Omega _2\) are polyhedral domains and that the interface \(\Gamma =\partial \Omega _1\cap \partial \Omega _2\) is a polyhedral surface or a polygon. Moreover, \(\Gamma \) is assumed to be expressed as the union of edges of some elements. We deal with the case where the interface is transversely connected with the boundary of the whole domain \(\overline{\Omega }=\overline{\Omega _1\cup \Omega _2}\). Consequently, the solution u of the interface problem may not have sufficient regularity, say \(u\in H^2(\Omega )\) or \(u|_{\Omega _1}\in H^2(\Omega _1)\), \(u|_{\Omega _2}\in H^2(\Omega _2)\). We succeed in deriving optimal order error estimates in an HDG norm and the \(L^2\) norm under low regularity assumptions on the solution, say \(u|_{\Omega _1}\in H^{1+s}(\Omega _1)\) and \(u|_{\Omega _2}\in H^{1+s}(\Omega _2)\) for some \(s\in (1/2,1]\), where \(H^{1+s}\) denotes the fractional order Sobolev space. Numerical examples to validate our results are also presented.

2.
Spheroidal harmonics and modified Bessel functions have wide applications in scientific and engineering computing. Recursive methods are developed to compute the logarithmic derivatives, ratios, and products of the prolate spheroidal harmonics (\(P_n^m(x)\), \(Q_n^m(x)\), \(n\ge m\ge 0\), \(x>1\)), the oblate spheroidal harmonics (\(P_n^m(ix)\), \(Q_n^m(ix)\), \(n\ge m\ge 0\), \(x>0\)), and the modified Bessel functions (\(I_n(x)\), \(K_n(x)\), \(n\ge 0\), \(x>0\)), in order to avoid direct evaluation of these functions, which may easily overflow or underflow for high degree/order and for extreme arguments. Stability analysis shows that the proposed recursive methods are stable for realistic degree/order and argument values. Physical examples in electrostatics are given to validate the recursive methods.
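As an illustration of the kind of ratio recursion the abstract refers to (a standard backward recurrence, not necessarily the authors' exact scheme), the following Python sketch computes the ratios \(I_{n+1}(x)/I_n(x)\) of modified Bessel functions without ever forming \(I_n(x)\) itself; the function name and the starting offset are illustrative.

```python
import numpy as np
from scipy.special import iv  # used only for the direct check at the end

def bessel_I_ratios(x, n_max, extra=40):
    """Backward recurrence for r_n = I_{n+1}(x)/I_n(x), n = 0, ..., n_max-1.

    Uses I_{n-1}(x) - I_{n+1}(x) = (2n/x) I_n(x), i.e. r_{n-1} = 1/(2n/x + r_n),
    started from r_N ~ 0 with N well above both n_max and x, so that no I_n is
    evaluated directly and overflow/underflow for large n is avoided."""
    N = n_max + extra
    r = 0.0                              # seed: I_{N+1}(x)/I_N(x) -> 0 as N grows
    ratios = np.empty(n_max)
    for n in range(N, 0, -1):
        r = 1.0 / (2.0 * n / x + r)      # r_{n-1} from r_n
        if n - 1 < n_max:
            ratios[n - 1] = r
    return ratios

x = 3.7
print(bessel_I_ratios(x, 4))
print([iv(n + 1, x) / iv(n, x) for n in range(4)])  # agrees to machine precision
```

Products and logarithmic derivatives then follow from the ratios alone, e.g. \(I_n(x)/I_0(x)=\prod_{k=0}^{n-1} I_{k+1}(x)/I_k(x)\), again without evaluating any \(I_n(x)\) directly.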

3.
A new weak Galerkin (WG) finite element method is developed and analyzed for solving second order elliptic problems with low regularity solutions in the Sobolev space \(W^{2,p}(\Omega )\) with \(p\in (1,2)\). A WG stabilizer was introduced by Wang and Ye (Math Comput 83:2101–2126, 2014) for a simpler variational formulation, and it has been commonly used since then in the WG literature. In this work, for the purpose of dealing with low regularity solutions, we propose to generalize the stabilizer of Wang and Ye by introducing a positive relaxation index to the mesh size h. The relaxed stabilization gives rise to considerable flexibility in treating weak continuity along the interior element edges. When the norm index \(p\in (1,2]\), we rigorously derive that the WG error in the energy norm has the optimal convergence order \(O(h^{l+1-\frac{1}{p}-\frac{p}{4}})\) by taking the relaxed factor \(\beta =1+\frac{2}{p}-\frac{p}{2}\), and it also has the optimal convergence order \(O(h^{l+2-\frac{2}{p}})\) in the \(L^2\) norm when the solution \(u\in W^{l+1,p}\) with \(p\in [1,1+\frac{2}{p}-\frac{p}{2}]\) and \(l\ge 1\). For \(p=2\), with the choice \(\beta =1\), the optimal error estimates in the energy and \(L^2\) norms for a source term in the Sobolev space \(L^2\) are recovered. Weak variational forms of the WG method give rise to desirable flexibility in enforcing boundary conditions and can be easily implemented without requiring a sufficiently large penalty factor as in the usual discontinuous Galerkin methods. In addition, numerical results illustrate that the proposed WG method with an over-relaxed factor \(\beta (\ge 1)\) converges at optimal algebraic rates for several low regularity elliptic problems.
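A quick sanity check of the quoted rates by direct substitution: for \(p=2\) the relaxed factor and both convergence orders reduce to the standard optimal values,
\[
\beta =1+\tfrac{2}{p}-\tfrac{p}{2}\Big|_{p=2}=1,\qquad l+1-\tfrac{1}{p}-\tfrac{p}{4}\Big|_{p=2}=l\ \ (\text{energy norm}),\qquad l+2-\tfrac{2}{p}\Big|_{p=2}=l+1\ \ (L^2\ \text{norm}),
\]
consistent with the classical estimates for \(W^{l+1,2}=H^{l+1}\)-regular solutions.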

4.
We investigate a noncooperative bargaining game for partitioning n agents into non-overlapping coalitions. The game has n time periods during which the players are called according to an exogenous agenda to propose offers. With probability \(\delta \), the game ends during any time period \(t<n\). If it does, the first t players on the agenda get a chance to propose but the others do not. Thus, \(\delta \) is a measure of the degree of democracy within the game (ranging from democracy for \(\delta =0\), through increasing levels of authoritarianism as \(\delta \) approaches 1, to dictatorship for \(\delta =1\)). We determine the subgame perfect equilibrium (SPE) and study how a player’s position on the agenda affects his bargaining power. We analyze the relation between the distribution of power of individual players, the level of democracy, and the welfare efficiency of the game. We find that purely democratic games are welfare inefficient and that introducing a degree of authoritarianism into the game makes the distribution of power more equitable and also maximizes welfare. These results remain invariant under two types of player preferences: one where each player’s preference is a total order on the space of possible coalition structures and the other where each player either likes or dislikes a coalition structure. Finally, we show that the SPE partition may or may not be core stable.

5.
We rigorously analyze error estimates and numerically compare the spatial/temporal resolution of various numerical methods for the discretization of the Dirac equation in the nonrelativistic limit regime, involving a small dimensionless parameter \(0<\varepsilon \ll 1\) which is inversely proportional to the speed of light. In this limit regime, the solution is highly oscillatory in time, i.e. there are propagating waves with wavelength \(O(\varepsilon ^2)\) and O(1) in time and space, respectively. We begin with several frequently used finite difference time domain (FDTD) methods and rigorously obtain their error estimates in the nonrelativistic limit regime by paying particular attention to how the error bounds depend explicitly on the mesh size h and time step \(\tau \) as well as the small parameter \(\varepsilon \). Based on the error bounds, in order to obtain ‘correct’ numerical solutions in the nonrelativistic limit regime, i.e. \(0<\varepsilon \ll 1\), the FDTD methods share the same \(\varepsilon \)-scalability on time step and mesh size: \(\tau =O(\varepsilon ^3)\) and \(h=O(\sqrt{\varepsilon })\). Then we propose and analyze two numerical methods for the discretization of the Dirac equation by using the Fourier spectral discretization for spatial derivatives combined with the symmetric exponential wave integrator and time-splitting technique for temporal derivatives, respectively. Rigorous error bounds for the two numerical methods show that their \(\varepsilon \)-scalability is improved to \(\tau =O(\varepsilon ^2)\) and \(h=O(1)\) when \(0<\varepsilon \ll 1\). Extensive numerical results are reported to support our error estimates.
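To make the stated \(\varepsilon \)-scalabilities concrete (constants omitted, exponents exactly as quoted above), take \(\varepsilon =10^{-2}\):
\[
\tau _{\mathrm{FDTD}}=O(\varepsilon ^3)\sim 10^{-6},\quad h_{\mathrm{FDTD}}=O(\sqrt{\varepsilon })\sim 10^{-1},\qquad \tau _{\mathrm{EWI/TSFP}}=O(\varepsilon ^2)\sim 10^{-4},\quad h_{\mathrm{EWI/TSFP}}=O(1),
\]
so in this regime the spectral-in-space integrators need on the order of \(1/\varepsilon \) times fewer time steps and a mesh size independent of \(\varepsilon \).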

6.
We address the problem of counting emitted photons in two-photon laser scanning microscopy. Following a laser pulse, photons are emitted after exponentially distributed waiting times. Modeling the counting process is of interest because photon detectors have a dead period after a photon is detected, which leads to an underestimate of the count of emitted photons. We describe a model with a Poisson(\(\alpha \)) number N of emitted photons, a dead period \(\Delta \) standardized by the fluorescence time constant \(\tau \) (\(\delta = \Delta /\tau \)), and an observed count D. The estimate of \(\alpha \) determines the intensity of a single pixel in an image. We first derive the distribution of D and study its properties. We then use it to estimate \(\alpha \) and \(\delta \) simultaneously by maximum likelihood. We show that our results improve the signal-to-noise ratio, hence the quality of actual images.
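A minimal simulation sketch of the counting model as described (N ~ Poisson(\(\alpha \)) emitted photons, exponential emission times measured in units of \(\tau \), and a dead period of length \(\delta \) after each detection); the parameter values are illustrative, and this is only the forward model, not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observed_counts(alpha, delta, n_pixels=50_000):
    """Draw the observed count D for many pixels under the described model."""
    D = np.empty(n_pixels, dtype=int)
    for i in range(n_pixels):
        n = rng.poisson(alpha)                            # photons actually emitted
        times = np.sort(rng.exponential(1.0, size=n))     # emission times, units of tau
        detected, blocked_until = 0, -np.inf
        for t in times:
            if t >= blocked_until:                        # detector is live
                detected += 1
                blocked_until = t + delta                 # dead period after a detection
        D[i] = detected
    return D

D = simulate_observed_counts(alpha=3.0, delta=0.5)
print(D.mean())   # noticeably below alpha = 3.0: the dead-period undercount
```

The gap between the mean of D and \(\alpha \) is exactly the bias that the joint maximum-likelihood estimation of \((\alpha ,\delta )\) is intended to correct.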

7.
Given a distributed system of \(n\) balls and \(n\) bins, how evenly can we distribute the balls to the bins, minimizing communication? The fastest non-adaptive and symmetric algorithm achieving a constant maximum bin load requires \(\varTheta (\log \log n)\) rounds, and any such algorithm running for \(r\in {\mathcal {O}}(1)\) rounds incurs a bin load of \(\varOmega ((\log n/\log \log n)^{1/r})\). In this work, we explore the fundamental limits of the general problem. We present a simple adaptive symmetric algorithm that achieves a bin load of 2 in \(\log ^* n+{\mathcal {O}}(1)\) communication rounds using \({\mathcal {O}}(n)\) messages in total. Our main result, however, is a matching lower bound of \((1-o(1))\log ^* n\) on the time complexity of symmetric algorithms that guarantee small bin loads. The essential preconditions of the proof are (i) a limit of \({\mathcal {O}}(n)\) on the total number of messages sent by the algorithm and (ii) anonymity of bins, i.e., the port numberings of balls need not be globally consistent. To show that our technique indeed yields tight bounds, we provide, for each assumption, an algorithm that violates it and in turn achieves a constant maximum bin load in constant time.

8.
We study the following energy-efficient scheduling problem. We are given a set of n jobs which have to be scheduled by a single processor whose speed can be varied dynamically. Each job \(J_j\) is characterized by a processing requirement (work) \(p_j\), a release date \(r_j\), and a deadline \(d_j\). We are also given a budget of energy E which must not be exceeded and our objective is to maximize the throughput (i.e., the number of jobs which are completed on time). We show that the problem can be solved optimally via dynamic programming in \(O(n^4 \log n \log P)\) time when all jobs have the same release date, where P is the sum of the processing requirements of the jobs. For the more general case with agreeable deadlines where the jobs can be ordered so that, for every \(i < j\), it holds that \(r_i \le r_j\) and \(d_i \le d_j\), we propose an optimal dynamic programming algorithm which runs in \(O(n^6 \log n \log P)\) time. In addition, we consider the weighted case where every job \(J_j\) is also associated with a weight \(w_j\) and we are interested in maximizing the weighted throughput (i.e., the total weight of the jobs which are completed on time). For this case, we show that the problem becomes \(\mathcal{NP}\)-hard in the ordinary sense even when all jobs have the same release date and we propose a pseudo-polynomial time algorithm for agreeable instances.

9.
A method for calculating the one-way quantum deficit is developed. It involves a careful study of post-measured entropy shapes. We discovered that in some regions of X-state space the post-measured entropy \(\tilde{S}\) as a function of measurement angle \(\theta \in [0,\pi /2]\) exhibits a bimodal behavior inside the open interval \((0,\pi /2)\), i.e., it has two interior extrema: one minimum and one maximum. Furthermore, cases are found when the interior minimum of such a bimodal function \(\tilde{S}(\theta )\) is smaller than the value at the endpoint \(\theta =0\) or \(\pi /2\). This leads to the formation of a boundary between the phases of one-way quantum deficit via finite jumps of optimal measured angle from the endpoint to the interior minimum. A phase diagram is constructed for a two-parameter family of X states. The subregions with variable optimal measured angle are around 1\(\%\) of the total region, with their relative linear sizes reaching \(17.5\%\), and the fidelity between the states of those subregions can be reduced to \(F=0.968\). In addition, a correction to the one-way deficit due to the interior minimum can reach \(2.3\%\). Such conditions are favorable for detecting the subregions with variable optimal measured angle of the one-way quantum deficit in an experiment.
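A small numerical sketch of the quantity being optimized, for an illustrative X state (the parameter values below are not from the paper): it evaluates the post-measured entropy \(\tilde{S}(\theta )\) for projective measurements of one qubit along a Bloch direction with polar angle \(\theta \) and scans \([0,\pi /2]\) for interior extrema.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def x_state(a, b, c, d, z, w):
    """Two-qubit X state in the basis |00>, |01>, |10>, |11>."""
    return np.array([[a, 0, 0, w],
                     [0, b, z, 0],
                     [0, np.conj(z), c, 0],
                     [np.conj(w), 0, 0, d]], dtype=complex)

def post_measured_entropy(rho, theta, phi=0.0):
    """sum_k p_k S(rho_A | k) after measuring qubit B along direction (theta, phi)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    n_sigma = (np.sin(theta) * np.cos(phi) * sx
               + np.sin(theta) * np.sin(phi) * sy
               + np.cos(theta) * sz)
    total = 0.0
    for sign in (+1, -1):
        P = 0.5 * (np.eye(2) + sign * n_sigma)          # rank-1 projector on qubit B
        M = np.kron(np.eye(2), P)
        p = float(np.real(np.trace(M @ rho @ M)))
        if p < 1e-12:
            continue
        rho_post = (M @ rho @ M) / p
        rho_A = rho_post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out B
        total += p * vn_entropy(rho_A)
    return total

rho = x_state(0.35, 0.15, 0.15, 0.35, 0.10, 0.20)       # satisfies the X-state constraints
thetas = np.linspace(0.0, np.pi / 2, 201)
S = np.array([post_measured_entropy(rho, t) for t in thetas])
print("minimizing theta:", thetas[np.argmin(S)])        # interior value signals a nontrivial optimum
```

Scanning such grids over a family of X states is one straightforward (if brute-force) way to locate the bimodal regions and endpoint-versus-interior minima described above.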

10.
In this paper, we study quantum codes over \(F_q\) from cyclic codes over \(F_q+uF_q+vF_q+uvF_q,\) where \(u^2=u,~v^2=v,~uv=vu,~q=p^m\), and p is an odd prime. We give the structure of cyclic codes over \(F_q+uF_q+vF_q+uvF_q\) and obtain self-orthogonal codes over \(F_q\) as Gray images of linear and cyclic codes over \(F_q+uF_q+vF_q+uvF_q\). In particular, we decompose a cyclic code over \(F_q+uF_q+vF_q+uvF_q\) into four cyclic codes over \(F_q\) to determine the parameters of the corresponding quantum code.
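One standard way to see the four-fold decomposition mentioned above (a sketch; the paper's choice of idempotents may differ, but any complete orthogonal set works the same way): since \(u^2=u\), \(v^2=v\) and \(uv=vu\), the elements \(e_1=uv\), \(e_2=u-uv\), \(e_3=v-uv\), \(e_4=1-u-v+uv\) are pairwise orthogonal idempotents with \(e_1+e_2+e_3+e_4=1\). Hence \(F_q+uF_q+vF_q+uvF_q\cong e_1F_q\oplus e_2F_q\oplus e_3F_q\oplus e_4F_q\cong F_q^4\), and a cyclic code C over this ring splits accordingly as \(C=e_1C_1\oplus e_2C_2\oplus e_3C_3\oplus e_4C_4\) with each \(C_i\) cyclic over \(F_q\), which is what allows the parameters of the corresponding quantum code to be read off from the four component codes.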

11.
For the XXZ subclass of symmetric two-qubit X states, we study the behavior of the quantum conditional entropy \(S_{cond}\) as a function of measurement angle \(\theta \in [0,\pi /2]\). Numerical calculations show that the function \(S_{cond}(\theta )\) for X states can have at most one local extremum in the open interval from zero to \(\pi /2\) (unimodality property). If the extremum is a minimum, the quantum discord displays a region with a variable (state-dependent) optimal measurement angle \(\theta ^*\). Such \(\theta \)-regions (phases, fractions) are very tiny in the space of X-state parameters. We also discover cases in which the conditional entropy has a local maximum inside the interval \((0,\pi /2)\). It is remarkable that the maxima exist in surprisingly wide regions, and the boundaries for such regions are defined by the same bifurcation conditions as for those with a minimum.

12.
We construct two sets of incomplete and extendible quantum pure orthogonal product states (POPS) in general bipartite high-dimensional quantum systems, which are all indistinguishable by local operations and classical communication. The first set of POPS is composed of two parts, one in \(\mathcal {C}^m\otimes \mathcal {C}^{n_1}\) with \(5\le m\le n_1\) and the other in \(\mathcal {C}^m\otimes \mathcal {C}^{n_2}\) with \(5\le m \le n_2\), where \(n_1\) is odd and \(n_2\) is even. The second one is in \(\mathcal {C}^m\otimes \mathcal {C}^n\) \((m, n\ge 4)\). Some subsets of these two sets can be extended into complete sets whose local indistinguishability can be decided by noncommutativity, which quantifies the quantumness of a quantum ensemble. Our study shows quantum nonlocality without entanglement.

13.
We begin by investigating relationships between two forms of Hilbert–Schmidt two-rebit and two-qubit “separability functions”—those recently advanced by Lovas and Andai (J Phys A Math Theor 50(29):295303, 2017), and those earlier presented by Slater (J Phys A 40(47):14279, 2007). In the Lovas–Andai framework, the independent variable \(\varepsilon \in [0,1]\) is the ratio \(\sigma (V)\) of the singular values of the \(2 \times 2\) matrix \(V=D_2^{1/2} D_1^{-1/2}\) formed from the two \(2 \times 2\) diagonal blocks (\(D_1, D_2\)) of a \(4 \times 4\) density matrix \(D=\left\| \rho _{ij}\right\| \). In the Slater setting, the independent variable \(\mu \) is the diagonal-entry ratio \(\sqrt{\frac{\rho _{11} \rho _ {44}}{\rho _ {22} \rho _ {33}}}\)—with, of central importance, \(\mu =\varepsilon \) or \(\mu =\frac{1}{\varepsilon }\) when both \(D_1\) and \(D_2\) are themselves diagonal. Lovas and Andai established that their two-rebit “separability function” \(\tilde{\chi }_1 (\varepsilon )\) (\(\approx \varepsilon \)) yields the previously conjectured Hilbert–Schmidt separability probability of \(\frac{29}{64}\). We are able, in the Slater framework (using cylindrical algebraic decompositions [CAD] to enforce positivity constraints), to reproduce this result. Further, we newly find its two-qubit, two-quater[nionic]-bit and “two-octo[nionic]-bit” counterparts, \(\tilde{\chi _2}(\varepsilon ) =\frac{1}{3} \varepsilon ^2 \left( 4-\varepsilon ^2\right) \), \(\tilde{\chi _4}(\varepsilon ) =\frac{1}{35} \varepsilon ^4 \left( 15 \varepsilon ^4-64 \varepsilon ^2+84\right) \) and \(\tilde{\chi _8} (\varepsilon )= \frac{1}{1287}\varepsilon ^8 \left( 1155 \varepsilon ^8-7680 \varepsilon ^6+20160 \varepsilon ^4-25088 \varepsilon ^2+12740\right) \). These immediately lead to predictions of Hilbert–Schmidt separability/PPT-probabilities of \(\frac{8}{33}\), \(\frac{26}{323}\) and \(\frac{44482}{4091349}\), in full agreement with those of the “concise formula” (Slater in J Phys A 46:445302, 2013), and, additionally, of a “specialized induced measure” formula. Then, we find a Lovas–Andai “master formula,” \(\tilde{\chi _d}(\varepsilon )= \frac{\varepsilon ^d \Gamma (d+1)^3 \, _3\tilde{F}_2\left( -\frac{d}{2},\frac{d}{2},d;\frac{d}{2}+1,\frac{3 d}{2}+1;\varepsilon ^2\right) }{\Gamma \left( \frac{d}{2}+1\right) ^2}\), encompassing both even and odd values of d. Remarkably, we are able to obtain the \(\tilde{\chi _d}(\varepsilon )\) formulas, \(d=1,2,4\), applicable to full (9-, 15-, 27-) dimensional sets of density matrices, by analyzing (6-, 9-, 15-) dimensional sets, with not only diagonal \(D_1\) and \(D_2\), but also an additional pair of nullified entries. Nullification of a further pair still leads to X-matrices, for which a distinctly different, simple Dyson-index phenomenon is noted. C. Koutschan, then, using his HolonomicFunctions program, develops an order-4 recurrence satisfied by the predictions of the several formulas, establishing their equivalence. A two-qubit separability probability of \(1-\frac{256}{27 \pi ^2}\) is obtained based on the operator monotone function \(\sqrt{x}\), with the use of \(\tilde{\chi _2}(\varepsilon )\).
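The "master formula" can be checked numerically against the quoted closed forms; below is a short Python/mpmath sketch for \(d=2\) (the regularized function \(_3\tilde{F}_2\) is obtained here by dividing \(_3F_2\) by \(\Gamma (b_1)\Gamma (b_2)\)).

```python
import mpmath as mp

def chi_master(d, eps):
    """Lovas-Andai master formula, with 3F2~ regularized by Gamma(b1)*Gamma(b2)."""
    b1, b2 = d / 2 + 1, 3 * d / 2 + 1
    F = mp.hyp3f2(-d / 2, d / 2, d, b1, b2, eps ** 2) / (mp.gamma(b1) * mp.gamma(b2))
    return eps ** d * mp.gamma(d + 1) ** 3 * F / mp.gamma(d / 2 + 1) ** 2

for eps in (0.2, 0.5, 0.9):
    closed = eps ** 2 * (4 - eps ** 2) / 3     # the quoted chi_2 formula
    print(eps, chi_master(2, eps), closed)     # the two values should coincide
```

For \(d=2\) the hypergeometric series terminates (its first upper parameter is \(-1\)), and the master formula collapses to \(\frac{1}{3}\varepsilon ^2(4-\varepsilon ^2)\); the same check can be run for \(d=4\) and \(d=8\) against \(\tilde{\chi _4}\) and \(\tilde{\chi _8}\).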

14.
Forecasting air-pollutant levels is an important issue, due to their adverse effects on public health, and often a legislative necessity. The advantage of Bayesian methods is their ability to provide density predictions which can easily be transformed into ordinal or binary predictions given a set of thresholds. We develop a Bayesian approach to forecasting PM\(_{10}\) and O\(_3\) levels that efficiently deals with extensive amounts of input parameters, and test whether it outperforms classical models and experts. The new approach is used to fit models for PM\(_{10}\) and O\(_3\) level forecasting that can be used in daily practice. We also introduce a novel approach for comparing models to experts based on estimated cost matrices. The results for diverse air quality monitoring sites across Slovenia show that Bayesian models outperform classical models in both PM\(_{10}\) and O\(_3\) predictions. The proposed models perform better than experts in PM\(_{10}\) and are on par with experts in O\(_3\) predictions—where experts already base their predictions on predictions from a statistical model. A Bayesian approach—especially using Gaussian processes—offers several advantages: superior performance, robustness to overfitting, more information, and the ability to efficiently adapt to different cost matrices.
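A generic sketch of the cost-matrix style of comparison (the class bands and cost values below are invented for illustration; the paper's estimation of the cost matrix is not reproduced):

```python
import numpy as np

# Rows = true ordinal class (e.g. low / moderate / high PM10 band), columns = predicted class.
cost = np.array([[0.0, 1.0, 4.0],
                 [1.0, 0.0, 1.0],
                 [4.0, 1.0, 0.0]])             # illustrative misforecast costs

def mean_cost(y_true, y_pred, cost):
    """Average cost of a forecaster's ordinal predictions under a given cost matrix."""
    return float(cost[y_true, y_pred].mean())

y_true = np.array([0, 1, 2, 2, 1, 0])
model  = np.array([0, 1, 1, 2, 1, 0])          # hypothetical model forecasts
expert = np.array([0, 2, 2, 2, 1, 1])          # hypothetical expert forecasts
print(mean_cost(y_true, model, cost), mean_cost(y_true, expert, cost))
```

Density forecasts from the Bayesian models can be turned into such ordinal predictions by thresholding, after which models and experts are compared on the same cost scale.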

15.
In many parallel and distributed multiprocessor systems, the processors are connected based on different types of interconnection networks. The topological structure of an interconnection network is typically modeled as a graph. Among the many kinds of network topologies, the crossed cube is one of the most popular. In this paper, we investigate the panpositionable panconnectedness problem with respect to the crossed cube. A graph G is r-panpositionably panconnected if for any three distinct vertices x, y, z of G and for any integer \(l_1\) satisfying \(r \le l_1 \le |V(G)| - r - 1\), there exists a path \(P = [x, P_1, y, P_2, z]\) in G such that (i) \(P_1\) joins x and y with \(l(P_1) = l_1\) and (ii) \(P_2\) joins y and z with \(l(P_2) = l_2\) for any integer \(l_2\) satisfying \(r \le l_2 \le |V(G)| - l_1 - 1\), where |V(G)| is the total number of vertices in G and \(l(P_1)\) (respectively, \(l(P_2)\)) is the length of path \(P_1\) (respectively, \(P_2\)). By mathematical induction, we demonstrate that the n-dimensional crossed cube \(CQ_n\) is n-panpositionably panconnected. This result indicates that the path embedding of joining x and z with a mediate vertex y in \(CQ_n\) is extremely flexible. Moreover, applying our result, crossed cube problems such as panpositionable pancyclicity, panpositionably Hamiltonian connectedness, and panpositionable Hamiltonicity can be solved.

16.
Structural properties of u-constacyclic codes over the ring \({\mathbb {F}}_p+u{\mathbb {F}}_p\) are given, where p is an odd prime and \(u^2=1\). Under a special Gray map from \({\mathbb {F}}_p+u{\mathbb {F}}_p\) to \({\mathbb {F}}_p^2\), some new non-binary quantum codes are obtained by this class of constacyclic codes.
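For context, a sketch of the standard structure behind such constructions (not necessarily the paper's "special" Gray map): since \(u^2=1\) and p is odd, \(e_{\pm }=\frac{1\pm u}{2}\) are orthogonal idempotents with \(e_++e_-=1\), so \({\mathbb {F}}_p+u{\mathbb {F}}_p\cong e_+{\mathbb {F}}_p\oplus e_-{\mathbb {F}}_p\cong {\mathbb {F}}_p\times {\mathbb {F}}_p\), and \(a+ub\mapsto (a+b,\,a-b)\) is one commonly used \({\mathbb {F}}_p\)-linear Gray-type map onto \({\mathbb {F}}_p^2\) compatible with this splitting.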

17.
This paper studies the problem of approximating a function f in a Banach space \(\mathcal{X}\) from measurements \(l_j(f)\), \(j=1,\ldots ,m\), where the \(l_j\) are linear functionals from \(\mathcal{X}^*\). Quantitative results for such recovery problems require additional information about the sought after function f. These additional assumptions take the form of assuming that f is in a certain model class \(K\subset \mathcal{X}\). Since there are generally infinitely many functions in K which share these same measurements, the best approximation is the center of the smallest ball B, called the Chebyshev ball, which contains the set \(\bar{K}\) of all f in K with these measurements. Therefore, the problem is reduced to analytically or numerically approximating this Chebyshev ball. Most results study this problem for classical Banach spaces \(\mathcal{X}\) such as the \(L_p\) spaces, \(1\le p\le \infty \), and for K the unit ball of a smoothness space in \(\mathcal{X}\). Our interest in this paper is in the model classes \(K=\mathcal{K}(\varepsilon ,V)\), with \(\varepsilon >0\) and V a finite dimensional subspace of \(\mathcal{X}\), which consists of all \(f\in \mathcal{X}\) such that \(\mathrm{dist}(f,V)_\mathcal{X}\le \varepsilon \). These model classes, called approximation sets, arise naturally in application domains such as parametric partial differential equations, uncertainty quantification, and signal processing. A general theory for the recovery of approximation sets in a Banach space is given. This theory includes tight a priori bounds on optimal performance and algorithms for finding near optimal approximations. It builds on the initial analysis given in Maday et al. (Int J Numer Method Eng 102:933–965, 2015) for the case when \(\mathcal{X}\) is a Hilbert space, and further studied in Binev et al. (SIAM UQ, 2015). It is shown how the recovery problem for approximation sets is connected with well-studied concepts in Banach space theory such as liftings and the angle between spaces. Examples are given that show how this theory can be used to recover several recent results on sampling and data assimilation.
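As a concrete (finite-dimensional, Euclidean) illustration of the recovery problem for approximation sets \(\mathcal{K}(\varepsilon ,V)\), the sketch below returns, among all vectors consistent with the measurements, the one closest to the subspace V; this mirrors the Hilbert-space setting of the cited Maday et al. work, and all names (recover, W, V) are illustrative rather than taken from the paper.

```python
import numpy as np

def recover(W, y, V):
    """Among all u with W @ u = y, return one minimizing dist(u, span(V))."""
    # Measurement-consistent affine set: u = u0 + Z c, with u0 a particular solution
    u0, *_ = np.linalg.lstsq(W, y, rcond=None)
    _, _, vh = np.linalg.svd(W)
    Z = vh[W.shape[0]:].T                          # basis of the null space of W (full row rank assumed)
    # Projector onto the orthogonal complement of span(V)
    Qv, _ = np.linalg.qr(V)
    Q = np.eye(W.shape[1]) - Qv @ Qv.T
    # Minimize ||Q (u0 + Z c)|| over c: an ordinary least-squares problem
    c, *_ = np.linalg.lstsq(Q @ Z, -Q @ u0, rcond=None)
    return u0 + Z @ c

rng = np.random.default_rng(1)
N, m, n = 6, 3, 2                                  # ambient dim, measurements, dim V
V = rng.standard_normal((N, n))
f = V @ rng.standard_normal(n) + 0.05 * rng.standard_normal(N)   # f close to V (small eps)
W = rng.standard_normal((m, N))                    # rows play the role of the functionals l_j
u_star = recover(W, W @ f, V)
print(np.linalg.norm(W @ u_star - W @ f))          # measurements are reproduced (~0)
print(np.linalg.norm(u_star - f))                  # recovery error
```

The quality of such a reconstruction is governed by \(\varepsilon \) and by the angle between V and the space unseen by the measurements, which is the kind of Banach-space quantity (liftings, angles between spaces) the general theory above works with.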

18.
Let \(H_{1}, H_{2},\ldots ,H_{n}\) be separable complex Hilbert spaces with \(\dim H_{i}\ge 2\) and \(n\ge 2\). Assume that \(\rho \) is a state in \(H=H_1\otimes H_2\otimes \cdots \otimes H_n\). \(\rho \) is called strong-k-separable \((2\le k\le n)\) if \(\rho \) is separable for any k-partite division of H. In this paper, an entanglement-witness criterion of strong-k-separability is obtained, which says that \(\rho \) is not strong-k-separable if and only if there exist a k-division space \(H_{m_{1}}\otimes \cdots \otimes H_{m_{k}}\) of H, a finite-rank linear elementary operator \(\Lambda :\mathcal {B}(H_{m_{2}}\otimes \cdots \otimes H_{m_{k}})\rightarrow \mathcal {B}(H_{m_{1}})\) that is positive on product states, and a state \(\rho _{0}\in \mathcal {S}(H_{m_{1}}\otimes H_{m_{1}})\), such that \(\mathrm {Tr}(W\rho )<0\), where \(W=(\mathrm{Id}\otimes \Lambda ^{\dagger })\rho _{0}\) is an entanglement witness. In addition, several different methods of constructing entanglement witnesses for multipartite states are also given.

19.
We propose, analyze, and test a new MHD discretization which decouples the system into two Oseen problems at each timestep yet maintains unconditional stability with respect to the time step size, is optimally accurate in space, and behaves like second order in time in practice. The proposed method chooses a parameter \(\theta \in [0,1]\), dependent on the viscosity \(\nu \) and magnetic diffusivity \(\nu _m\), so that the explicit treatment of certain viscous terms does not cause instabilities, and gives temporal accuracy \(O(\Delta t^2 + (1-\theta )|\nu -\nu _m|\Delta t)\). In practice, \(\nu \) and \(\nu _m\) are small, and so the method behaves like second order. When \(\theta =1\), the method reduces to a linearized BDF2 method, but it has been proven by Li and Trenchea that such a method is stable only in the uncommon case of \(\frac{1}{2}< \frac{\nu }{\nu _m} < 2\). For the proposed method, stability and convergence are rigorously proven for appropriately chosen \(\theta \), and several numerical tests are provided that confirm the theory and show the method provides excellent accuracy in cases where usual BDF2 is unstable.

20.
We introduce two scheduling problems, the flexible bandwidth allocation problem (\(\textsc {FBAP}\)) and the flexible storage allocation problem (\(\textsc {FSAP}\)). In both problems, we have an available resource and a set of requests, each consisting of a minimum and a maximum resource requirement for the duration of its execution, as well as a profit accrued per allocated unit of the resource. In \(\textsc {FBAP}\), the goal is to assign the available resource to a feasible subset of requests, such that the total profit is maximized, while in \(\textsc {FSAP}\) we also require that each satisfied request is given a contiguous portion of the resource. Our problems generalize the classic bandwidth allocation problem (BAP) and storage allocation problem (SAP) and are therefore \(\text {NP-hard}\). Our main results are a 3-approximation algorithm for \(\textsc {FBAP}\) and a \((3+\epsilon )\)-approximation algorithm for \(\textsc {FSAP}\), for any fixed \(\epsilon >0 \). These algorithms make nonstandard use of the local ratio technique. Furthermore, we present a \((2+\epsilon )\)-approximation algorithm for \(\textsc {SAP}\), for any fixed \(\epsilon >0 \), thus improving the best known ratio of \(\frac{2e-1}{e-1} + \epsilon \). Our study is also motivated by critical resource allocation problems arising in all-optical networks.

