Found 20 similar documents (search time: 31 ms)
1.
The typechecking problem for transformations of relational data into tree data is the following: given a relational-to-XML transformation P and an XML type d, decide whether, for every database instance, the result of applying P to that instance satisfies d. TreeQL programs with projection-free conjunctive queries (see Alon et al. in ACM Trans. Comput. Log. 4(3):315–354, 2003) are considered as transformations, and DTDs with arbitrary regular expressions as XML types.
A non-elementary upper bound for the typechecking problem was already given by Alon et al. (ACM Trans. Comput. Log. 4(3):315–354,
2003) (although in a more general setting, where equality and negation in projection-free conjunctive queries and additional universal
integrity constraints are allowed).
In this paper we show that the typechecking problem is coNEXPTIME-complete.
As an intermediate step we consider the following problem, which can be formulated independently of XML notions: given a set of triples of the form (φ, k, j), where φ is a projection-free conjunctive query and k, j are natural numbers, decide whether there exists a database such that, for each triple (φ, k, j) in the set, there exists a natural number α such that there are exactly k + j·α tuples satisfying the query φ in that database. Our main technical contribution is a NEXPTIME algorithm for the latter problem.
Partially supported by Polish Ministry of Science and Higher Education research project N206 022 31/3660, 2006/2009.
This paper is an extended version of [20], where the coNEXPTIME upper bound was shown.
2.
Fast Algorithms for the Density Finding Problem
We study the problem of finding a specific density subsequence of a sequence arising from the analysis of biomolecular sequences.
Given a sequence A = (a_1, w_1), (a_2, w_2), …, (a_n, w_n) of n ordered pairs (a_i, w_i) of numbers a_i and widths w_i > 0 for each 1 ≤ i ≤ n, two nonnegative numbers ℓ, u with ℓ ≤ u, and a number δ, the Density Finding Problem is to find, over all O(n²) consecutive subsequences A(i, j) satisfying the width constraint ℓ ≤ w(i, j) = ∑_{r=i}^{j} w_r ≤ u, the consecutive subsequence A(i*, j*) whose density is closest to δ. The extensively studied Maximum-Density Segment Problem is the special case of the Density Finding Problem with δ = ∞. We show that the Density Finding Problem has a lower bound of Ω(n log n) in the algebraic decision tree model of computation. We give an algorithm for the Density Finding Problem that runs in optimal O(n log n) time and O(n log n) space for the case when there is no upper bound on the width of the sequence, i.e., u = w(1, n). For the general case, we give an algorithm that runs in O(n log² m) time and O(n + m log m) space, where m is a parameter determined by the instance and w_min = min_{1≤r≤n} w_r. As a byproduct, we give another O(n) time and space algorithm for the Maximum-Density Segment Problem.
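The problem statement above can be checked against a brute-force sketch. This is an O(n²) illustration only (the algorithms in the abstract exist precisely to beat it), and the density of A(i, j) is assumed here to be ∑_{r=i}^{j} a_r / w(i, j), the convention used for the Maximum-Density Segment Problem.

```python
def density_finding(pairs, lo, hi, delta):
    """Brute-force Density Finding: pairs is a list of (a_i, w_i) with w_i > 0;
    return the (i, j) with lo <= w(i, j) <= hi whose density is closest to delta,
    together with the achieved gap |density - delta|."""
    n = len(pairs)
    best, best_gap = None, float("inf")
    for i in range(n):
        a_sum = w_sum = 0.0
        for j in range(i, n):                 # extend the window one pair at a time
            a, w = pairs[j]
            a_sum += a
            w_sum += w
            if lo <= w_sum <= hi:             # width constraint
                gap = abs(a_sum / w_sum - delta)
                if gap < best_gap:
                    best, best_gap = (i, j), gap
    return best, best_gap
```

With δ = ∞ (a very large target) the same loop degenerates into maximum-density segment search, matching the special-case remark in the abstract.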
Supported in part by Grants NSC95-2221-E-001-016-MY3, NSC-94-2422-H-001-0001, and NSC-95-2752-E-002-005-PAE, and by the Taiwan Information Security Center (TWISC) under Grants NSC95-2218-E-001-001, NSC95-3114-P-001-002-Y, NSC94-3114-P-001-003-Y, and NSC94-3114-P-011-001.
3.
Given a sequence A of length M and a regular expression R of length P, an approximate regular expression pattern-matching algorithm computes the score of the optimal alignment between A and one of the sequences B exactly matched by R. An alignment between sequences A = a_1 a_2 … a_M and B = b_1 b_2 … b_N is a list of ordered pairs (i_1, j_1), (i_2, j_2), …, (i_t, j_t) such that i_k < i_{k+1} and j_k < j_{k+1}. In this case the alignment aligns symbols a_{i_k} and b_{j_k}, and leaves blocks of unaligned symbols, or gaps, between them. A scoring scheme S associates costs with each aligned symbol pair and each gap. The alignment's score is the sum of the associated costs, and an optimal alignment is one of minimal score. There are a variety of schemes for scoring alignments. In a concave gap penalty scoring scheme S = {δ, w}, a function δ(a, b) gives the score of each aligned pair of symbols a and b, and a concave function w(k) gives the score of a gap of length k. A function w is concave if and only if, for all k > 1, w(k + 1) − w(k) ≤ w(k) − w(k − 1). In this paper we present an O(MP(log M + log² P)) algorithm for approximate regular expression matching for an arbitrary δ and any concave w.
This work was supported in part by the National Institutes of Health under Grant RO1 LM04960.
4.
M. S. Barketau T. C. E. Cheng C. T. Ng Vladimir Kotov Mikhail Y. Kovalyov 《Journal of Scheduling》2008,11(1):17-28
In this paper we consider the problem of scheduling n jobs on a single machine, where the jobs are processed in batches and the processing time of each job is a step function of its waiting time, which is the time between the start of the processing of the batch to which the job belongs and the start of the processing of the job. For job i, if its waiting time is less than a given threshold value D, then it requires a basic processing time a_i; otherwise, it requires an extended processing time a_i + b_i. The objective is to minimize the completion time of the last job. We first show that the problem is NP-hard in the strong sense even if all b_i are equal, that it is NP-hard even if b_i = a_i for all i, and that it is non-approximable in polynomial time with a constant performance guarantee Δ < 3/2, unless P = NP. We then present O(n log n) and O(n^{3F−1} log n / F^F) algorithms for the case where all a_i are equal and for the case where there are F, F ≥ 2, distinct values of a_i, respectively. We further propose an O(n² log n) approximation algorithm for the general problem, with a performance guarantee expressed in terms of m*, the number of batches in an optimal schedule. All the above results apply, or can be easily modified, for the corresponding open-end bin packing problem.
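The step-function effect is easy to see by evaluating the makespan of a fixed batch sequence. This sketch assumes jobs within a batch are processed sequentially in the given order; choosing the batches and the order is the hard part the abstract addresses.

```python
def makespan(batches, D):
    """Completion time of the last job for a fixed batching.
    batches: list of batches, each a list of (a_i, b_i) jobs in processing order.
    A job's waiting time is the total processing time of the jobs before it in
    its batch: it needs a_i if that wait is below the threshold D, and a_i + b_i
    otherwise (the step function from the abstract)."""
    t = 0.0
    for batch in batches:
        elapsed = 0.0                      # waiting time within the batch
        for a, b in batch:
            elapsed += a if elapsed < D else a + b
        t += elapsed
    return t
```

For example, two identical jobs (a = 2, b = 3) in one batch finish at time 4 when D = 3 but at time 7 when D = 2, since the second job then incurs the extended processing time.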
5.
Approximate string matching is about finding a given string pattern in a text while allowing some degree of error. In this paper we present a space-efficient data structure to solve the 1-mismatch and 1-difference problems. Given a text T of length n over an alphabet A, we can preprocess T and build an o(n log² n)-bit data structure so that, for any query pattern P of length m, we can find all 1-mismatch (or 1-difference) occurrences of P in O(|A| m log log n + occ) time, where occ is the number of occurrences. This is the fastest known query time given that the space of the data structure is o(n log² n) bits. The space of our data structure can be further reduced to O(n log |A|) with the query time increasing by a factor of log^ε n, for 0 < ε ≤ 1. Furthermore, our solution can be generalized to solve the k-mismatch (and the k-difference) problem in O(|A|^k m^k (k + log log n) + occ) and O(log^ε n (|A|^k m^k (k + log log n) + occ)) time using an o(n log² n)-bit and an O(n log |A|)-bit indexing data structure, respectively. We assume that the alphabet size |A| is suitably bounded for the o(n log² n)-bit space data structure.
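The 1-mismatch problem itself is simple to state in code. The naive O(nm) scan below defines the set of occurrences the index is required to report; the abstract's contribution is answering the same query in O(|A| m log log n + occ) time from a succinct index.

```python
def one_mismatch_occurrences(text, pattern):
    """All starting positions where pattern occurs in text with at most one
    mismatching character (Hamming distance <= 1). Naive reference check."""
    n, m = len(text), len(pattern)
    occ = []
    for i in range(n - m + 1):
        mismatches = sum(1 for j in range(m) if text[i + j] != pattern[j])
        if mismatches <= 1:
            occ.append(i)
    return occ
```

The 1-difference variant additionally allows a single insertion or deletion; the same brute-force idea applies with an edit-distance test instead of a Hamming test.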
6.
Hazem M. Bahig 《The Journal of supercomputing》2008,43(1):99-104
In this paper, we study the merging of two sorted arrays on the EREW PRAM under two restrictions: (1) the elements of the two arrays are taken from the integer range [1, n], where n = max(n_1, n_2) and n_1, n_2 are the array lengths; (2) the elements are taken from either a uniform distribution or a non-uniform distribution subject to a per-processor balance condition, for 1 ≤ i ≤ p (the number of processors). We give a new optimal deterministic algorithm using p processors on the EREW PRAM. For a suitable range of the parameters, the running time of the algorithm is O(log^(g) n), which is faster than previous results, where log^(g) n = log log^(g−1) n for g > 1 and log^(1) n = log n. We also extend the domain of the input data to [1, n^k], where k is a constant.
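The iterated-logarithm notation above can be made concrete with a small helper (base-2 logarithm assumed here, since the abstract does not fix the base):

```python
import math

def iter_log(n, g):
    """log^(g) n as defined in the abstract: log^(1) n = log n and
    log^(g) n = log log^(g-1) n for g > 1 (base 2 assumed)."""
    v = float(n)
    for _ in range(g):
        v = math.log2(v)
    return v
```

For instance, log^(2) 65536 = log log 65536 = log 16 = 4, illustrating how quickly the bound shrinks as g grows.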
7.
As is well known, a finite field F_{q^n} = GF(q^n) can be described in terms of n × n matrices A over the field F_q = GF(q) such that their powers A^i, i = 1, 2, …, q^n − 1, correspond to all nonzero elements of the field. It is proved that, for fields F_{q^n} of characteristic 2, such a matrix A can be chosen to be symmetric. Several constructions of field-representing symmetric matrices are given. These matrices A^i, together with the all-zero matrix, can be considered as an F_{q^n}-linear matrix code in the rank metric with maximum rank distance d = n and maximum possible cardinality q^n. These codes are called symmetric rank codes. In the vector representation, such codes are maximum rank distance (MRD) linear [n, 1, n] codes, which allows one to use known rank-error-correcting algorithms. For symmetric codes, an algorithm of erasure symmetrization is proposed, which considerably reduces the decoding complexity as compared with standard algorithms. It is also shown that a linear [n, k, d = n − k + 1] MRD code C_k containing the above-mentioned one-dimensional symmetric code as a subcode has the following property: the corresponding transposed code is also F_{q^n}-linear. Such codes have an extended capability of correcting symmetric errors and erasures.
8.
Rahul Tripathi 《Theory of Computing Systems》2010,46(2):193-221
The 1-versus-2 queries problem, which has been extensively studied in computational complexity theory, asks in its generality whether every efficient algorithm that makes at most 2 queries to a Σ_k^p-complete language L_k has an efficient simulation that makes at most 1 query to L_k. We obtain solutions to this problem for hypotheses weaker than previously considered. We prove that:

(I) for each k ≥ 2, P_tt^{Σ_k^p[2]} ⊆ ZPP^{Σ_k^p[1]} ⇒ PH = Σ_k^p, and
(II) P_tt^{NP[2]} ⊆ ZPP^{NP[1]} ⇒ PH = S_2^p.

Here, for any complexity class C and integer j ≥ 1, we define ZPP^{C[j]} to be the class of problems solvable by zero-error randomized algorithms that run in polynomial time, make at most j queries to C, and succeed with probability at least 1/2 + 1/poly(⋅). This same definition of ZPP^{C[j]}, also considered in Cai and Chakaravarthy (J. Comb. Optim. 11(2):189–202, 2006), subsumes the class of problems solvable by randomized algorithms that always answer correctly in expected polynomial time and make at most j queries to C.
Hemaspaandra, Hemaspaandra, and Hempel (SIAM J. Comput. 28(2):383–393, 1998), for k > 2, and Buhrman and Fortnow (J. Comput. Syst. Sci. 59(2):182–194, 1999), for k = 2, had obtained the same consequence as ours in (I) using the stronger hypothesis P_tt^{Σ_k^p[2]} ⊆ P^{Σ_k^p[1]}. Fortnow, Pavan, and Sengupta (J. Comput. Syst. Sci. 74(3):358–363, 2008) had obtained the same consequence as ours in (II) using the stronger hypothesis P_tt^{NP[2]} ⊆ P^{NP[1]}.
9.
Indexing of factors or substrings is a widely used and useful technique in stringology and can be seen as a tool for solving diverse text-algorithmic problems. A gapped-factor is a concatenation of a factor of length k, a gap of length d, and another factor of length k′; such a gapped factor is called a (k−d−k′)-gapped-factor. The problem of indexing gapped-factors was considered recently by Peterlongo et al. (In: Stringology, pp. 182–196, 2006), who devised a data structure, namely a gapped factor tree (GFT), to index them. Given a text T of length n over the alphabet Σ and the values of the parameters k, d and k′, the construction of GFT requires O(n|Σ|) time. Once GFT is constructed, a given (k−d−k′)-gapped-factor can be reported in O(k + k′ + Occ) time, where Occ is the number of occurrences of that factor in T. In this paper, we present a new, improved indexing scheme for gapped-factors. The improvements come from two aspects. Firstly, we generalize the indexing data structure in the sense that, unlike GFT, it is independent of the parameters k and k′. Secondly, our data structure can be constructed in O(n log^{1+ε} n) time and space, where 0 < ε < 1. The only price we pay is a slight increase, i.e. an additional log log n term, in the query time.
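A (k−d−k′)-gapped-factor query is easy to specify by brute force, which clarifies what the index must report; the scheme in the abstract answers the same query from a preprocessed structure instead of rescanning the text.

```python
def gapped_factor_occurrences(text, left, d, right):
    """Start positions where `left` (length k) occurs, followed by an arbitrary
    gap of exactly d characters, followed by `right` (length k'). Naive scan."""
    k, kp = len(left), len(right)
    span = k + d + kp
    return [i for i in range(len(text) - span + 1)
            if text[i:i + k] == left and text[i + k + d:i + span] == right]
```

For example, with left = "ab", d = 1 and right = "cd", the (2−1−2)-gapped-factor matches wherever "ab?cd" occurs for any middle character.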
Preliminary version appeared in [29].
C.S. Iliopoulos is supported by EPSRC and Royal Society grants.
M.S. Rahman is supported by the Commonwealth Scholarship Commission in the UK under the Commonwealth Scholarship and Fellowship
Plan (CSFP).
M.S. Rahman is on leave from Department of CSE, BUET, Dhaka 1000, Bangladesh.
10.
An axis-parallel k-dimensional box is a Cartesian product R_1 × R_2 × ⋅⋅⋅ × R_k where R_i (for 1 ≤ i ≤ k) is a closed interval of the form [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension k such that G is representable as the intersection graph of (axis-parallel) boxes in k-dimensional space. The concept of boxicity finds applications in various areas such as ecology and operations research. A number of NP-hard problems are either polynomial-time solvable or have much better approximation ratios on low-boxicity graphs. For example, the max-clique problem is polynomial-time solvable on bounded-boxicity graphs, and the maximum independent set problem for boxicity-d graphs, given a box representation, has a ⌊1 + (1/c) log n⌋^{d−1} approximation ratio for any constant c ≥ 1 when d ≥ 2. In most cases, the first step usually is computing a low-dimensional box representation of the given graph. Deciding whether the boxicity of a graph is at most 2 is itself NP-hard.
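The easy direction of the definition, going from a box representation to the graph it represents, can be sketched directly: two axis-parallel boxes intersect iff their intervals overlap in every dimension. (The hard direction, computing a low-boxicity representation of a given graph, is what the abstract discusses.)

```python
from itertools import combinations

def boxes_intersect(b1, b2):
    """Boxes given as lists of (lo, hi) closed intervals, one per dimension;
    they intersect iff the intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    """Edge set of the intersection graph of the given k-dimensional boxes."""
    return {(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}
```

A graph G then has boxicity at most k exactly when some list of k-dimensional boxes makes intersection_graph(boxes) equal to the edge set of G.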
11.
Consider a set of n advertisements (hereafter called ads) A = {A_1, …, A_n} competing to be placed in a planning horizon that is divided into N time intervals called slots. An ad A_i is specified by its size s_i and frequency w_i. The size s_i represents the amount of space the ad occupies in a slot. Ad A_i is said to be scheduled if exactly w_i copies of A_i are placed in the slots, subject to the restriction that a slot contains at most one copy of an ad. In this paper, we consider two problems. The MINSPACE problem minimizes the maximum fullness among all slots in a feasible schedule, where the fullness of a slot is the sum of the sizes of the ads assigned to the slot. For the MAXSPACE problem, in addition, we are given a common maximum fullness S for all slots; the total size of the ads placed in a slot cannot exceed S, and the objective is to find a feasible schedule of ads such that the total occupied slot space is maximized. We examine the complexity status of both problems and provide heuristics with performance guarantees.
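The MINSPACE objective on a candidate schedule is straightforward to evaluate. In this sketch a schedule is a list of slots, each a list of ad indices; feasibility (exactly w_i copies of ad i, at most one copy per slot) is assumed to be checked elsewhere.

```python
def max_fullness(schedule, sizes):
    """Maximum fullness over slots, where the fullness of a slot is the sum of
    the sizes of the ads assigned to it; MINSPACE minimizes this value over
    all feasible schedules."""
    return max(sum(sizes[a] for a in slot) for slot in schedule)
```

MAXSPACE instead fixes a cap S on every slot's fullness and maximizes the total occupied space, i.e. the sum (rather than the maximum) of the slot fullnesses.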
12.
Rui J. P. de Figueiredo 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(3):265-273
A framework is presented for processing fuzzy sets for which the universe of discourse X = {x} is a separable Hilbert space, which, in particular, may be a Euclidean space. In a given application, X would constitute a feature space. The membership functions of sets in such an X are then “membership functionals”, that is, mappings from a vector space to the real line. This paper considers the class Φ of fuzzy sets A whose membership functionals μ_A belong to a Reproducing Kernel Hilbert Space (RKHS) F(X) of bounded analytic functionals on X and satisfy an appropriate constraint. These functionals can be expanded in abstract power series in x, commonly known as Volterra functional series in x. Because of the one-to-one relationship between the fuzzy sets A and their respective μ_A, one can process the sets A as objects using their μ_A as intermediaries. Thus the structure of the uncertainty present in the fuzzy sets can be processed in a vector space without descending to the level of processing of vectors in the feature space, as is usually done in the literature in the field. Also, the framework allows one to integrate human and machine judgments in the definition of fuzzy sets, and to use concepts analogous to probabilistic concepts in assigning membership values to combinations of fuzzy sets. Some analytical and interpretive consequences of this approach are presented and discussed. A result of particular interest is the best approximation of a membership functional μ_A in F(X) based on interpolation on a training set {(v_i, u_i), i = 1, …, q} and under the positivity constraint. The optimal analytical solution takes the form of an Optimal Interpolative Neural Network (OINN), proposed by the author in 1990 for best approximation of pattern classification systems in an F(X) space setting. An example is briefly described of an application of this approach to the diagnosis of Alzheimer’s disease.
13.
In this paper we study the problem of scheduling a set of periodic tasks nonpreemptively in hard real-time systems, where it is critical for all requests of the tasks to be processed in time. A task T is characterized by its arrival time A, its period P, and its execution time C. Starting from A, a new request of T arrives every P units of time, requesting C units of processing time, and its deadline coincides with the arrival of the next request of T. All requests must be processed nonpreemptively to meet their corresponding deadlines. We show that the problem of testing the feasibility of a given task set {T_1, T_2, …, T_n} satisfying P_{i+1} = k_i P_i, where each k_i is an integer ≥ 1 for 1 ≤ i ≤ n−1, on a single processor is NP-hard in the strong sense, even if all tasks have the same arrival time. For task sets satisfying P_{i+1} = K P_i, where K is an integer ≥ 2 for 1 ≤ i ≤ n−1, and all tasks have the same arrival time, we present linear-time (in the number of requests) optimal scheduling algorithms as well as linear-time (in the number of tasks, i.e., n) algorithms for testing feasibility in both uniprocessor and multiprocessor systems. We also extend our results to more general task sets.
14.
Achour Mostefaoui Sergio Rajsbaum Michel Raynal Corentin Travers 《Distributed Computing》2008,21(3):201-222
Solving agreement problems deterministically, such as consensus and k-set agreement, in asynchronous distributed systems prone to an unbounded number of process failures has been shown to be impossible. To circumvent this impossibility, unreliable failure detectors for the crash failure model have been widely studied. These are oracles that provide information on failures; the exact nature of such information is defined by a set of abstract properties that a particular class of failure detectors satisfies. The weakest class of such failure detectors that allows solving consensus is Ω. This paper considers failure detector classes from the literature that solve k-set agreement in the crash failure model, and studies their relative power. It shows that two families of failure detector classes, parameterized by x (1 ≤ x ≤ n) and by y (0 ≤ y ≤ n), can be “added” to provide a failure detector of the class Ω^z (1 ≤ z ≤ n, a generalization of Ω). It also characterizes the power of such an “addition”: the class parameterized by y can construct Ω^z iff y + z > t, and the class parameterized by x can construct Ω^z iff x + z > t + 1, where t is the maximum number of processes that can crash in a run. As an example, the paper shows that, while one of these classes allows solving 2-set agreement (but not consensus) and another allows solving t-set agreement (but not (t − 1)-set agreement), a system with failure detectors of both classes can solve consensus for any value of t. More generally, the paper studies these failure detector classes together with Ω^z, and shows which reductions among them are possible and which are not. The paper also presents a message-passing Ω^k-based k-set agreement protocol and shows that Ω^k is not enough to solve (k − 1)-set agreement. In that sense, it can be seen as a step toward the characterization of the weakest failure detector class that allows solving the k-set agreement problem.
An extended abstract of this paper has appeared in the proceedings of PODC 2006 [20]. This work has been supported partially by a grant from LAFMI (Franco-Mexican Lab in Computer Science), the European Network of Excellence ReSIST, and PAPIIT-UNAM.
15.
Some Hamiltonians are constructed from the unitary Ř_{i,i+1}(θ, φ) matrices, where θ and φ are time-independent parameters. We show that entanglement sudden death (ESD) can happen in these closed Yang–Baxter systems. It is found that the ESD is not only sensitive to the initial condition, but also depends strongly on the particular Yang–Baxter system. Especially, we find that the parameter φ has a great influence on ESD.
16.
A traveling salesman game is a cooperative game (N, c_D). Here N, the set of players, is the set of cities (or the vertices of the complete graph) and c_D is the characteristic function, where D is the underlying cost matrix. For all S ⊆ N, define c_D(S) to be the cost of a minimum-cost Hamiltonian tour through the vertices of S ∪ {0}, where 0 is called the home city. Define Core(N, c_D) as the core of a traveling salesman game (N, c_D). Okamoto (Discrete Appl. Math. 138:349–369, 2004) conjectured that for the traveling salesman game (N, c_D) with D satisfying the triangle inequality, the problem of testing whether Core(N, c_D) is empty or not is NP-hard. We prove that this conjecture is true. This result directly implies the NP-hardness for the general case when D is asymmetric. We also study approximately fair cost allocations for these games. For this, we introduce the cycle cover games and show that the core of a cycle cover game is non-empty by finding a fair cost allocation vector in polynomial time. For a traveling salesman game, an ε-approximate core, for a given ε > 1, consists of the cost allocation vectors x that cover the total cost and satisfy x(S) ≤ ε · c_D(S) for all S ⊆ N. By viewing an approximate fair cost allocation vector for this game as a sum of exact fair cost allocation vectors of several related cycle cover games, we provide a polynomial-time algorithm demonstrating the non-emptiness of the log₂(|N|−1)-approximate core by exhibiting a vector in this approximate core for the asymmetric traveling salesman game. We improve on this further by finding, for some constant c, a better approximate core in polynomial time. We also show that there exists an ε₀ > 1 such that it is NP-hard to decide whether the ε₀-Core is empty or not.
A preliminary version of the paper appeared in the Third Workshop on Approximation and Online Algorithms (WAOA), 2005.
17.
Given m facilities, each with an opening cost, n demands, and the distance between every demand and facility, the Facility Location problem finds a solution that opens some facilities and connects every demand to an opened facility such that the total cost of the solution is minimized. The k-Facility Location problem further requires that the number of opened facilities is at most k, where k is a parameter given in the instance of the problem. We consider Facility Location problems satisfying the condition that, for every demand, the ratio of the longest distance to the facilities and the shortest distance to the facilities is at most ω, where ω is a predefined constant. Using the local search approach with a scaling technique and an error-control technique, for any arbitrarily small constant ε > 0, we give a polynomial-time approximation algorithm for the ω-constrained Facility Location problem with approximation ratio 1 + √(ω+1) + ε, which significantly improves the previous best known ratio of √(ω+1)/α for some 1 ≤ α ≤ 2, and a polynomial-time approximation algorithm for the ω-constrained k-Facility Location problem with approximation ratio ω + 1 + ε. On the aspect of approximation hardness, we prove that, unless NP ⊆ DTIME(n^{O(log log n)}), the ω-constrained Facility Location problem cannot be approximated within 1 + ln(ω − 1), which slightly improves the previous best known hardness result of 1.243 + 0.316 ln(ω − 1). Experimental results on standard test instances of the Facility Location problem show that our algorithm also performs well in practice.
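The local search approach mentioned above can be illustrated with a toy version that only uses single open/close moves and none of the scaling or error-control machinery; it is a sketch of the general idea, not the paper's algorithm.

```python
def total_cost(open_set, open_cost, dist):
    """Facility-opening cost plus, for every demand, the distance to the
    nearest opened facility. dist[j][i] is the demand-j-to-facility-i distance."""
    connection = sum(min(dist[j][i] for i in open_set) for j in range(len(dist)))
    return sum(open_cost[i] for i in open_set) + connection

def local_search(open_cost, dist):
    """Start with all facilities open; greedily toggle single facilities
    open/closed while doing so strictly reduces the total cost."""
    m = len(open_cost)
    current = set(range(m))
    improved = True
    while improved:
        improved = False
        for i in range(m):
            candidate = current ^ {i}          # toggle facility i
            if candidate and total_cost(candidate, open_cost, dist) < \
                             total_cost(current, open_cost, dist):
                current, improved = candidate, True
    return current
```

On an instance with an expensive facility (cost 10) near demand 0 and a cheap facility (cost 1) near demand 1, the search closes the expensive facility and serves both demands from the cheap one.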
18.
Jean Frédéric Myoupo Aboubecrine Ould Cheikhna Idrissa Sow 《The Journal of supercomputing》2010,52(2):135-148
The Distributed Mobility-Adaptive Clustering (DMAC) algorithm due to Basagni partitions the nodes of a mobile ad hoc network into clusters, thus giving the network a hierarchical organization. This algorithm supports the mobility of the nodes, even during cluster formation. The main feature of DMAC is that in a weighted network (in which two or more nodes cannot have the same weight), nodes have to choose the clusterheads taking into account only the node weight, i.e. the mobility when a node's weight is the inverse of its speed. In our approach, many nodes may have the same speed and hence the same weight. We assume that nodes have no identities and that the number of nodes, say n, is the only known parameter of the network. After the randomized clustering, we show that the initialization problem can be solved in a multi-hop ad hoc wireless network of n stations in O(k^{1/2} log^{1/2} k) + D_b − 1 + O(log(max(P_i)) + log²(max(P_i))) broadcast rounds with high probability, where k is the number of clusters, D_b is the blocking diameter, and max(P_i), 1 ≤ i ≤ k, is the maximum number of nodes in a cluster. Thus the initialization protocol presented here uses fewer broadcast rounds than the one in Ravelomanana (IEEE Trans. Parallel Distributed Syst. 18(1):17–28, 2007).
19.
An instance of the path hitting problem consists of two families of paths, 𝒟 and ℋ, in a common undirected graph, where each path in ℋ is associated with a non-negative cost. We refer to 𝒟 and ℋ as the sets of demand and hitting paths, respectively. When p ∈ ℋ and q ∈ 𝒟 share at least one mutual edge, we say that p hits q. The objective is to find a minimum-cost subset of ℋ whose members collectively hit those of 𝒟. In this paper we provide constant-factor approximation algorithms for path hitting, confined to instances in which the underlying graph is a tree, a spider, or a star. Although such restricted settings may appear to be very simple, we demonstrate that they still capture some of the most basic covering problems in graphs. Our approach combines several novel ideas: we extend the algorithm of Garg, Vazirani and Yannakakis (Algorithmica, 18:3–20, 1997) for approximate multicuts and multicommodity flows in trees to prove new integrality properties; we present a reduction that involves multiple calls to this extended algorithm; and we introduce a polynomial-time solvable variant of the edge cover problem, which may be of independent interest.
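The hitting relation at the heart of the problem is simple to express in code (paths given as vertex sequences, edges treated as undirected):

```python
def path_edges(path):
    """Undirected edge set of a path given as a vertex sequence."""
    return {frozenset(e) for e in zip(path, path[1:])}

def hits(p, q):
    """p hits q iff the two paths share at least one mutual edge."""
    return bool(path_edges(p) & path_edges(q))
```

The optimization problem is then to pick a minimum-cost subset H' ⊆ ℋ such that every demand path q ∈ 𝒟 satisfies hits(p, q) for some p ∈ H'; the constant-factor algorithms of the abstract solve this approximately on trees, spiders and stars.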
An extended abstract of this paper appeared in Proceedings of the 14th Annual European Symposium on Algorithms, 2006.
This work is part of D. Segev’s Ph.D. thesis prepared at Tel-Aviv University under the supervision of Prof. Refael Hassin.