Similar Articles (20 results)
1.
This study tackles the image color-to-gray conversion problem. The aim was to understand which qualities of a conversion improve the accuracy of results when grayscale conversion is applied as a pre-processing step for vision algorithms, in particular dense stereo matching. We evaluated many state-of-the-art color-to-grayscale conversion algorithms. We also propose an ad hoc adaptation of the most theoretically promising algorithm, which we call Multi-Image Decolorize (MID). This algorithm stems from an in-depth analysis of the existing conversion solutions and consists of a multi-image extension of the algorithm by Grundland and Dodgson (The decolorize algorithm for contrast enhancing, color to grayscale conversion, Tech. Rep. UCAM-CL-TR-649, University of Cambridge, 2005), which is based on predominant component analysis. In addition, two variants of this algorithm are proposed and analyzed: one with standard unsharp masking and another with a chromatic weighted unsharp masking technique (Nowak and Baraniuk in IEEE Trans Image Process 7(7):1068–1074, 1998), which enhances the local contrast as in the approach by Smith et al. (Comput Graph Forum 27(2), 2008). We tested the relative performance of this conversion against many other solutions, using the StereoMatcher test suite (Scharstein and Szeliski in Int J Comput Vis 47(1–3):7–42, 2002) with a variety of datasets and dense stereo matching algorithms. The results show that the overall performance of the proposed MID conversion is good, and the reported tests provide useful information and insights on how to design color-to-gray conversions that improve matching performance. We also report some interesting secondary results, such as the role of standard vs. chromatic unsharp masking in improving correspondence matching.
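As a concrete illustration of the two unsharp-masking variants discussed above, here is a minimal Python sketch of a fixed-weight grayscale conversion followed by standard and chromatically weighted unsharp masking. It is not the MID algorithm (which uses predominant component analysis rather than fixed luminance weights), and the `sigma`, `amount`, and chromatic weighting choices below are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_unsharp(rgb, sigma=2.0, amount=0.5):
    """Fixed luminance projection plus standard unsharp masking."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # NTSC luminance weights
    blurred = gaussian_filter(gray, sigma)        # low-pass version
    return gray + amount * (gray - blurred)       # re-inject local contrast

def gray_chromatic_unsharp(rgb, sigma=2.0, amount=0.5):
    """Chromatically weighted variant: sharpen more where the color
    channels carry local contrast that the luminance projection loses."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    blurred = gaussian_filter(gray, sigma)
    chroma = sum(np.abs(rgb[..., c] - gaussian_filter(rgb[..., c], sigma))
                 for c in range(3))               # per-pixel chromatic contrast
    weight = chroma / (chroma.max() + 1e-12)      # normalized to [0, 1]
    return gray + amount * weight * (gray - blurred)
```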

2.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics 86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n^5) time while the PTAS in Jaitly et al. takes O(n^11) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case. Part of this work was done during Z.-Z. Chen's visit to City University of Hong Kong.

3.
Weighted timed automata (WTA), introduced in Alur et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 49–62, Springer, Berlin, 2001) and Behrmann et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 147–161, Springer, Berlin, 2001), are an extension of the timed automata of Alur and Dill (Theor. Comput. Sci. 126(2):183–235, 1994), a widely accepted formalism for the modelling and verification of real-time systems. Weighted timed automata extend timed automata by allowing costs on the locations and edges. There has been a lot of interest (Bouyer et al. in Inf. Process. Lett. 98(5):188–194, 2006; Bouyer et al. in Log. Methods Comput. Sci. 4(2):9, 2008; Brihaye et al. in Proceedings of FORMATS/FTRTFT'04, LNCS, vol. 3253, pp. 277–292, Springer, Berlin, 2004; Brihaye et al. in Inf. Comput. 204(3):408–433, 2006) in studying the model checking problem for weighted timed automata. The properties of interest are written in the logic weighted CTL (WCTL), an extension of CTL with costs. It has been shown (Bouyer et al. in Log. Methods Comput. Sci. 4(2):9, 2008) that model checking WTAs with a single clock using WCTL with no external cost variables is decidable, while 3 clocks render the problem undecidable (Bouyer et al. in Inf. Process. Lett. 98(5):188–194, 2006). The question of 2 clocks is open. In this paper, we introduce a subclass of weighted timed automata called weighted integer reset timed automata (WIRTA) and study its model checking problem. We give a clock reduction technique for WIRTA: given a WIRTA A with n≥1 clocks, we show that a single-clock WIRTA A' preserving the paths and costs of A can be obtained. This gives us the decidability of model checking WIRTA with n≥1 clocks and m≥1 costs using WCTL with no external cost variables. We then show that for a restricted version of WCTL with external cost variables, the model checking problem is undecidable for WIRTA with 3 stopwatch costs and 1 clock. Finally, we show that model checking WTA with 2 clocks and 1 stopwatch cost against WCTL with no external cost variables is undecidable, thereby answering a question that had long remained open.

4.
N. Kharrat, Z. Mghazli. Calcolo 49(1):41–61, 2012
We present an a posteriori residual analysis for the Chorin–Temam projection scheme approximating the time-dependent Stokes model (Chorin in Math. Comput. 23:341–353, 1969; Temam in Arch. Ration. Mech. Appl. 33:377–385, 1969). Based on the multi-step approach introduced in Bergam et al. (Math. Comput. 74(251):1117–1138, 2004), we derive error estimators, with respect to both time and space approximations, related to the diffusive and incompressible parts of the Stokes equations. Using a conforming finite element discretization, we prove the equivalence between error and estimators under specific conditions.

5.
The weighted essentially non-oscillatory (WENO) methods are a popular high-order spatial discretization for hyperbolic partial differential equations. Recently Henrick et al. (J. Comput. Phys. 207:542–567, 2005) noted that the fifth-order WENO method by Jiang and Shu (J. Comput. Phys. 126:202–228, 1996) is in general only third-order accurate near critical points of the smooth regions. By applying a simple mapping function to the original weights of Jiang and Shu, Henrick et al. developed a mapped WENO method that achieves the optimal order of accuracy near critical points. In this paper we study the mapped WENO scheme and find that, when it is used for solving problems with discontinuities, the mapping function of Henrick et al. may amplify the effect of the non-smooth stencils and thus cause a potential loss of accuracy near discontinuities. This effect may be difficult to observe for the fifth-order WENO method unless a long-time simulation is desired. However, if the mapping function is applied to seventh-order WENO methods (Balsara and Shu in J. Comput. Phys. 160:405–452, 2000), the error can grow much faster, so that it can be observed at a moderate output time. In this paper a new mapping function is proposed to overcome this potential loss of accuracy.
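For reference, the mapping of Henrick et al. has a simple closed form, g_k(ω) = ω(d_k + d_k² − 3d_kω + ω²)/(d_k² + ω(1 − 2d_k)), which fixes the ideal weight d_k (g(d_k) = d_k with g'(d_k) = g''(d_k) = 0). A small Python sketch follows; the weight values in the example are the standard fifth-order ideal weights, and the perturbed weights are illustrative.

```python
import numpy as np

def henrick_map(omega, d):
    """Mapping function g_k of Henrick et al. (J. Comput. Phys. 207, 2005):
    pulls a WENO-JS weight omega toward the ideal weight d, restoring the
    optimal order of accuracy at critical points."""
    return (omega * (d + d**2 - 3.0 * d * omega + omega**2)
            / (d**2 + omega * (1.0 - 2.0 * d)))

def mapped_weights(omega, d):
    """Apply the map component-wise and renormalize."""
    g = henrick_map(omega, d)
    return g / g.sum()

# Ideal weights of the fifth-order scheme (Jiang & Shu) and a perturbed set
# of nonlinear weights, e.g. near a critical point:
d = np.array([0.1, 0.6, 0.3])
omega = np.array([0.05, 0.7, 0.25])
print(mapped_weights(omega, d))  # pulled back toward d
```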

6.
In this paper we provide the full spectral decomposition of the Multi-Class Lighthill–Whitham–Richards (MCLWR) traffic models described in Wong et al. (Transp. Res. Part A 36:827–841, 2002) and Benzoni-Gavage and Colombo (Eur. J. Appl. Math. 14:587–612, 2003). Even though the eigenvalues of these models can only be found numerically, knowledge of the spectral structure allows the use of characteristic-based High Resolution Shock Capturing (HRSC) schemes. We compare the characteristic-based approach to the component-wise schemes used in Zhang et al. (J. Comput. Phys. 191:639–659, 2003), and propose two strategies to minimize the oscillatory behavior that can be observed with the component-wise approach.
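To make the spectral structure concrete: for MCLWR fluxes f_i = ρ_i v_i(r) with total density r = Σ_j ρ_j, the flux Jacobian is a rank-one perturbation of a diagonal matrix, A = diag(v_i(r)) + (ρ_i v_i'(r)) 1ᵀ, whose eigenvalues are the characteristic speeds and are computed numerically. The sketch below assumes a simple Greenshields-type closure v_i(r) = v_i^max (1 − r/ρ_max); the cited papers allow more general speed–density relations, and all numbers here are made up.

```python
import numpy as np

# Assumed Greenshields-type closure v_i(r) = vmax_i * (1 - r/rho_max).
vmax = np.array([1.0, 0.8, 0.6])    # free-flow speed per vehicle class (made up)
rho_max = 1.0
rho = np.array([0.20, 0.10, 0.15])  # class densities (made up)

r = rho.sum()
v = vmax * (1.0 - r / rho_max)      # class speeds at total density r
dv = -vmax / rho_max                # dv_i/dr for this closure

# Jacobian of f_i = rho_i * v_i(r):
# A = diag(v) + outer(rho*dv, ones), a rank-one perturbation of a diagonal.
A = np.diag(v) + np.outer(rho * dv, np.ones_like(rho))

print(np.sort(np.linalg.eigvals(A).real))  # the characteristic speeds
```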

7.
Coordination has been recognized by many researchers as the most important feature of multi-agent systems. Coordination is defined as managing interdependencies amongst activities (Malone and Crowston in ACM Comput. Surv. 26(1):87–119, 1994). The traditional approach is to hard-wire a coordination mechanism into the system at design time. However, in dynamic and open environments, many attributes of the system cannot be accurately identified at design time. Therefore, dynamic coordination, capable of coordinating activities at run-time, has emerged. On the other hand, a successful dynamic coordination model for multi-agent systems requires knowledge sharing as well as a common vocabulary, so an ontological approach is an appropriate basis for proposing dynamic coordination models. In this paper, an Ontology-Driven Dynamic Coordination Model (O-DC) for Multiagent-Based Mobile Workforce Brokering Systems (MWBS) (Mousavi et al. in Int. J. Comput. Sci. 6(5):557–565, 2010; Mousavi et al. in Proceedings of the 4th IEEE International Symposium on Information Technology, ITSim'10, Kuala Lumpur, Malaysia, 15–17 June 2010, vol. 3, pp. 1416–1421, 2010; Mousavi and Nordin in Proceedings of the IEEE International Conference on Electrical Engineering and Informatics, Bandung, Indonesia, 17–19 June 2007, pp. 294–297, 2007) is proposed and formulated. Subsequently, the applicability of O-DC is examined via simulation based on a real-world scenario.

8.
We study complexity issues of Walrasian equilibrium in a special case of combinatorial auctions, called single-minded auctions, in which every participant is interested in only one subset of commodities. Chen et al. (J. Comput. Syst. Sci. 69(4):675–687, 2004) showed that it is NP-hard to decide the existence of a Walrasian equilibrium for a single-minded auction and proposed a notion of approximate Walrasian equilibrium called relaxed Walrasian equilibrium. We show that every single-minded auction has a relaxed Walrasian equilibrium that satisfies at least two-thirds of the participants, proving a conjecture posed in Chen et al. Motivated by practical considerations, we introduce another concept of approximate Walrasian equilibrium called weak Walrasian equilibrium, and show NP-completeness and hardness-of-approximation results for weak Walrasian equilibria. In search of positive results, we restrict our attention to the tollbooth problem (Guruswami et al. in Proceedings of the Symposium on Discrete Algorithms (SODA), pp. 1164–1173, 2005), where every participant is interested in a single path in some underlying graph. We give a polynomial-time algorithm to determine the existence of a Walrasian equilibrium and compute one (if it exists) when the graph is a tree. However, the problem remains NP-hard for general graphs.
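As an illustration of the relaxed notion, here is a small Python sketch that counts satisfied participants under one common reading of the definitions: a winner is satisfied if her bundle's total price does not exceed her value, a loser if it is at least her value. The market-clearing side of a Walrasian equilibrium (unsold items priced at zero) is omitted, and the data are made up.

```python
def satisfied_fraction(bundles, values, prices, winners):
    """Fraction of single-minded bidders whose utility is maximized.
    `bundles[i]` is the one set bidder i wants, `values[i]` her value,
    `prices` a per-item price dict, `winners` the set of allocated bidders."""
    happy = 0
    for i, (S, v) in enumerate(zip(bundles, values)):
        cost = sum(prices[item] for item in S)
        if (i in winners and v >= cost) or (i not in winners and v <= cost):
            happy += 1
    return happy / len(bundles)

# Two bidders competing for item 'a'; prices chosen so the loser is priced out.
bundles = [{'a', 'b'}, {'a'}]
values = [5.0, 2.0]
prices = {'a': 3.0, 'b': 1.0}
print(satisfied_fraction(bundles, values, prices, winners={0}))  # -> 1.0
```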

9.
The steel mill slab design problem from CSPLIB is a combinatorial optimization problem motivated by an application in the steel industry. It has been widely studied in the constraint programming community, several methods have been proposed to solve it, and a steel mill slab library containing 380 instances was created. A closely related bin-packing problem, the multiple knapsack problem with color constraints, originating from the same industrial application, has been discussed in the integer programming community. In particular, a simple integer program for this problem was given by Forrest et al. (INFORMS J Comput 18:129–134, 2006). The aim of this paper is to bring these different studies together. Moreover, we adapt the model of Forrest et al. to the steel mill slab design problem. Using this model and a state-of-the-art integer programming solver, all instances of the steel mill slab library can be solved efficiently to optimality; we thereby improved the solution values of 76 instances compared to previous results (Schaus et al., Constraints 16:125–147, 2010). Finally, we consider a recently introduced variant of the steel mill slab design problem in which, among all solutions minimizing the leftover, one seeks a solution requiring a minimum number of slabs. For that variant we introduce two approaches and solve all instances of the steel mill slab library with this slightly changed objective function to optimality.
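To give the flavor of the integer-programming route, here is a deliberately simplified Python/PuLP sketch of a multiple-knapsack model with color constraints: identical slabs of capacity CAP, at most MAXCOL colors per slab, minimizing the number of slabs used. The actual model of Forrest et al. handles multiple admissible slab sizes and the leftover objective; all data below are illustrative.

```python
from pulp import (LpProblem, LpVariable, LpMinimize, lpSum, LpBinary,
                  PULP_CBC_CMD)

weights = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}       # order weights (made up)
colors = {1: 'red', 2: 'red', 3: 'blue', 4: 'green', 5: 'blue'}
CAP, MAXCOL = 10, 2
slabs = range(len(weights))                     # at most one slab per order

prob = LpProblem("slab_design_sketch", LpMinimize)
x = LpVariable.dicts("x", [(o, s) for o in weights for s in slabs], cat=LpBinary)
y = LpVariable.dicts("y", [(c, s) for c in set(colors.values()) for s in slabs],
                     cat=LpBinary)
u = LpVariable.dicts("u", slabs, cat=LpBinary)  # slab s is used

prob += lpSum(u[s] for s in slabs)              # minimize slabs used
for o in weights:
    prob += lpSum(x[o, s] for s in slabs) == 1  # every order on one slab
for s in slabs:
    prob += lpSum(weights[o] * x[o, s] for o in weights) <= CAP * u[s]
    prob += lpSum(y[c, s] for c in set(colors.values())) <= MAXCOL
    for o in weights:
        prob += x[o, s] <= y[colors[o], s]      # color present if order placed

prob.solve(PULP_CBC_CMD(msg=False))
for s in slabs:
    placed = [o for o in weights if x[o, s].value() == 1]
    if placed:
        print(f"slab {s}: orders {placed}")
```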

10.
The problem of maximizing the depth of penetration of a rigid impactor into semi-infinite solid media (a concrete shield) is investigated analytically and numerically using a two-stage model and the experimental data of Forrestal and Tzou (Int J Solids Struct 34(31–32):4127–4146, 1997). The shape of the axisymmetric rigid impactor is taken as an unknown design variable. To solve the formulated optimization problem for a nonadditive functional, we express the depth of penetration (DOP) under isoperimetric constraints and apply approaches based on analytical and qualitative variational methods as well as a numerical global-search optimization algorithm. Particular attention is given to constraints on the mass of the penetrating body, expressed by its volume in the case of a solid body and by its surface area in the case of a thin-walled rigid shell. As a result of the performed investigation, based on the two-term and three-term two-stage models proposed by Forrestal et al. (Int J Impact Eng 15(4):396–405, 1994), Forrestal and Tzou (Int J Solids Struct 34(31–32):4127–4146, 1997) and effectively developed by Ben-Dor et al. (Comput Struct 56:243–248, 2002; Comput Struct 81(1):9–14, 2003a; Int J Solids Struct 40(17):4487–4500, 2003b; Mech Des Struct Mach 34(2):139–156, 2006), we find analytical and numerical solutions and analyze the singularities of optimal forms.

11.
Using Biologically Inspired Features for Face Processing
In this paper, we show that a new set of visual features, derived from the feed-forward model of the primate visual object recognition pathway proposed by Riesenhuber and Poggio (the R&P model) (Nature Neurosci. 2(11):1019–1025, 1999), is capable of matching the performance of some of the best current representations for face identification and facial expression recognition. Previous work has shown that R&P model features can achieve a high level of performance on object recognition tasks (Serre, T., et al. in IEEE Comput. Vis. Pattern Recognit. 2:994–1000, 2005). Here we modify the R&P model to create a new set of features useful for face identification and expression recognition. Results from tests on the FERET, ORL and AR datasets show that these features are capable of matching, and sometimes outperforming, other top visual features such as local binary patterns (Ahonen, T., et al. in 8th European Conference on Computer Vision, pp. 469–481, 2004) and histogram-of-gradient features (Dalal, N., Triggs, B. in International Conference on Computer Vision & Pattern Recognition, pp. 886–893, 2005). Having a model based on shared lower-level features, with face- and object-recognition-specific higher-level features, is consistent with findings from electrophysiology and functional magnetic resonance imaging experiments. Thus, our model begins to address the complete recognition problem in a biologically plausible way.

12.
This paper studies vehicle routing problems on asymmetric metrics. Our starting point is the directed k-TSP problem: given an asymmetric metric (V,d), a root r∈V and a target k≤|V|, compute the minimum-length tour that contains r and at least k other vertices. We present a polynomial-time O((log^2 n / log log n)·log k)-approximation algorithm for this problem. We use this algorithm for directed k-TSP to obtain an O(log^2 n / log log n)-approximation algorithm for the directed orienteering problem. This answers positively the question of poly-logarithmic approximability of directed orienteering, an open problem from Blum et al. (SIAM J. Comput. 37(2):653–670, 2007). The previously best known results were quasi-polynomial-time algorithms with approximation guarantees of O(log^2 k) for directed k-TSP and O(log n) for directed orienteering (Chekuri and Pal in IEEE Symposium on Foundations of Computer Science, pp. 245–253, 2005). Using the algorithm for directed orienteering within the framework of Blum et al. (SIAM J. Comput. 37(2):653–670, 2007) and Bansal et al. (ACM Symposium on Theory of Computing, pp. 166–174, 2004), we also obtain poly-logarithmic approximation algorithms for the directed versions of discounted-reward TSP and the vehicle routing problem with time windows.

13.
The potential flow equations which govern the free-surface motion of an ideal fluid (the water wave problem) are notoriously difficult to solve for a number of reasons. First, they are a classical free-boundary problem where the domain shape is one of the unknowns to be found. Additionally, they are strongly nonlinear (with derivatives appearing in the nonlinearity) without a natural dissipation mechanism so that spurious high-frequency modes are not damped. In this contribution we address the latter of these difficulties using a surface formulation (which addresses the former complication) supplemented with physically-motivated viscous effects recently derived by Dias et al. (Phys. Lett. A 372:1297–1302, 2008). The novelty of our approach is to derive a weakly nonlinear model from the surface formulation of Zakharov (J. Appl. Mech. Tech. Phys. 9:190–194, 1968) and Craig and Sulem (J. Comput. Phys. 108:73–83, 1993), complemented with the viscous effects mentioned above. Our new model is simple to implement while being both faithful to the physics of the problem and extremely stable numerically.

14.
Accurate estimation of the breast skin-line is an important prerequisite for both enhancement and analysis of mammograms in computer-aided detection of breast cancer. In our proposed system, an initial estimate of the skin-line is first computed using a combination of adaptive thresholding and connected-component analysis. This skin-line is susceptible to errors in the top and bottom portions of the breast region. Using the observation that the Euclidean distance from the edge of the stroma to the actual skin-line is usually uniform, we develop a novel dependency approach for estimating the skin-line boundary of the breast: constraints are first developed between the stroma edge and the initial skin-line boundary using the Euclidean distance, and these constraints are then propagated to estimate the upper and lower skin-line portions. We evaluated the performance of our skin-line estimation algorithm by comparing the estimated boundary with a ground-truth boundary drawn by an expert radiologist, adopting the polyline distance measure (PDM) as the error metric. As part of our protocol, we compared the results of our dependency approach with those of a deformable-model strategy (proposed by Ferrari et al. in Med Biol Eng Comput 42(2):210–208, 2004). On a dataset of 82 images from the MIAS database, the dependency approach yielded a mean error (μ) of 3.28 pixels with a standard deviation (σ) of 2.17 pixels using the PDM. In comparison, the deformable-model strategy yielded μ = 4.92 pixels with σ = 1.91 pixels. The improvement is statistically significant, and the results are clinically relevant according to the radiologists who evaluated them.
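For reference, one common form of the polyline distance measure averages, over the vertices of each boundary, the distance to the other boundary, symmetrized over both directions. A minimal Python sketch follows; the exact PDM definition used in the paper follows its cited reference, and the two small polylines are made up.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def polyline_distance(poly1, poly2):
    """Symmetrized mean vertex-to-polyline distance (one form of the PDM)."""
    def one_sided(src, dst):
        return sum(min(point_segment_distance(p, dst[i], dst[i + 1])
                       for i in range(len(dst) - 1)) for p in src)
    return (one_sided(poly1, poly2) + one_sided(poly2, poly1)) / (len(poly1) + len(poly2))

estimated = [(0, 0.0), (1, 0.2), (2, 0.0)]
ground_truth = [(0, 0.1), (1, 0.1), (2, 0.1)]
print(polyline_distance(estimated, ground_truth))
```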

15.
In this paper we consider the p-ary transitive reduction (TR_p) problem, where p>0 is an integer; for p=2 this problem arises in inferring a sparsest possible (biological) signal transduction network consistent with a set of experimental observations, with the goal of minimizing false positive inferences even at the risk of false negatives. Special cases of TR_p have been investigated before in different contexts; the best previous results are as follows:
(1)  The minimum equivalent digraph problem, which corresponds to a special case of TR_1 with no critical edges, is known to be MAX-SNP-hard; it admits a polynomial-time algorithm with an approximation ratio of 1.617+ε for any constant ε>0 (Chiu and Liu in Sci. Sin. 4:1396–1400, 1965) and can be solved in linear time for directed acyclic graphs (Aho et al. in SIAM J. Comput. 1(2):131–137, 1972).
(2)  A 2-approximation algorithm exists for TR_1 (Frederickson and JàJà in SIAM J. Comput. 10(2):270–283, 1981; Khuller et al. in 19th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 937–938, 1999).
In this paper, our contributions are as follows:
•  We observe that TR_p, for any integer p>0, can be solved in linear time for directed acyclic graphs using the ideas in Aho et al. (SIAM J. Comput. 1(2):131–137, 1972); a naive sketch of transitive reduction on DAGs follows this list.
•  We provide a 1.78-approximation for TR_1 that improves the 2-approximation mentioned in (2) above.
•  We provide a (2+o(1))-approximation for TR_p on general graphs for any fixed prime p>1.
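As promised in the first bullet, here is a naive Python sketch of transitive reduction on a DAG: drop an edge (u,v) whenever v stays reachable from u without it. It illustrates the object being computed, not the linear-time construction of Aho et al.

```python
from collections import defaultdict

def transitive_reduction_dag(edges):
    """Transitive reduction of a DAG: keep edge (u, v) only if no longer
    path from u to v exists. Naive and quadratic per edge -- a sketch."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip_edge):
        stack, seen = [src], {src}
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in adj[x]:
                if (x, y) != skip_edge and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]

# Edge (0, 2) is implied by 0 -> 1 -> 2, so it is removed.
print(transitive_reduction_dag([(0, 1), (1, 2), (0, 2)]))  # [(0, 1), (1, 2)]
```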
R. Albert’s research was partly supported by a Sloan Research Fellowship in Science and Technology. B. DasGupta’s research was partly supported by NSF grants DBI-0543365, IIS-0612044 and IIS-0346973. E. Sontag’s research was partly supported by NSF grants EIA 0205116 and DMS-0504557.

16.
This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham et al., 2002) and the Unstructured Information Management Architecture (UIMA) (Ferrucci and Lally in Nat Lang Eng 10(3–4):327–348, 2004). GrAF is intended to serve as a “pivot” to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and from GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.

17.
Registration is a technique nowadays commonly used in medical imaging. A drawback of most current registration schemes is that all tissue is treated as non-rigid (Staring et al., Proceedings of the SPIE 2006, vol. 6144, pp. 1–10, 2006), so rigid objects in an image, such as bony structures or surgical instruments, may be transformed non-rigidly. In this paper, we integrate the concept of local rigidity into the FLexible Image Registration Toolbox (FLIRT) (Haber and Modersitzki in SIAM J. Sci. Comput. 27(5):1594–1607, 2006; Modersitzki, Numerical Methods for Image Registration, 2004). The idea is to add a penalty for local non-rigidity to the cost function and thus to penalize non-rigid transformations of rigid objects. As our examples show, the new approach maintains local rigidity in the desired fashion; for example, it can keep bony structures rigid during registration. We show how the concept of local rigidity can be integrated into the FLIRT approach and present the variational backbone, a proper discretization, and a multilevel optimization scheme. We compare the FLIRT approach to the B-spline approach. As expected from the more general setting of the FLIRT approach, our examples demonstrate that the FLIRT results are superior: much smoother, with smaller deformations, and visually much more pleasing.
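To illustrate the penalty idea in the simplest terms: local rigidity of a transformation y(x) = x + u(x) means its Jacobian is locally orthogonal, so one toy penalty sums ‖JᵀJ − I‖²_F over the voxels flagged as rigid. The Python sketch below uses that simplification; the actual FLIRT term and its discretization are as developed in the cited papers.

```python
import numpy as np

def rigidity_penalty(u, mask, h=1.0):
    """Toy local-rigidity penalty for a 2-D displacement field u (H x W x 2,
    component 0 = x-displacement): where `mask` is True, penalize deviation
    of J = I + grad(u) from orthogonality via ||J^T J - I||_F^2."""
    dudy, dudx = np.gradient(u[..., 0], h)   # derivatives of x-displacement
    dvdy, dvdx = np.gradient(u[..., 1], h)   # derivatives of y-displacement
    total = 0.0
    for i, j in zip(*np.nonzero(mask)):
        J = np.array([[1.0 + dudx[i, j], dudy[i, j]],
                      [dvdx[i, j], 1.0 + dvdy[i, j]]])
        total += np.sum((J.T @ J - np.eye(2)) ** 2)
    return total

# In a registration loop one would minimize:
#   cost = dissimilarity(fixed, warped) + alpha * rigidity_penalty(u, rigid_mask)
```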

18.
We study web caching with request reordering. The goal is to maintain a cache of web documents so that a sequence of requests can be served at low cost; to improve cache hit rates, a limited reordering of requests is allowed. Feder et al. (Proceedings of the 13th ACM–SIAM Symposium on Discrete Algorithms, pp. 104–105, 2002), who introduced this problem, considered caches of size 1, i.e., a cache can store one document. They presented an offline algorithm based on dynamic programming as well as online algorithms that achieve constant-factor competitive ratios. For arbitrary cache sizes, Feder et al. (Theor. Comput. Sci. 324:201–218, 2004) gave online strategies that have nearly optimal competitive ratios in several cost models.

19.
We provide new bounds for the worst-case approximation ratio of the classic Longest Processing Time (Lpt) heuristic for related machine scheduling (Q||C_max). For different machine speeds, Lpt was first considered by Gonzalez et al. (SIAM J. Comput. 6(1):155–166, 1977). The best previously known bounds originate from more than 20 years back: Dobson (SIAM J. Comput. 13(4):705–716, 1984), and independently Friesen (SIAM J. Comput. 16(3):554–560, 1987), showed that the worst-case ratio of Lpt is in the interval (1.512,1.583), and in (1.52,1.67), respectively. We tighten the upper bound to 1+√3/3 ≈ 1.5773, and the lower bound to 1.54. Although this improvement might seem minor, we consider the structure of potential lower bound instances more systematically than former works. We present a scheme for a job-exchanging process which, repeated any number of times, gradually increases the lower bound. For the new upper bound, this systematic method, together with a new idea of introducing fractional jobs, facilitated a proof that is surprisingly simple relative to the result. We present the upper-bound proof in parameterized terms, which leaves room for further improvements.
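For concreteness, the Lpt heuristic analyzed here is easily stated: sort jobs by processing requirement, largest first, and assign each to the machine on which it would finish earliest given current loads and speeds. A short Python sketch with made-up data:

```python
def lpt_related(jobs, speeds):
    """Longest Processing Time heuristic for Q||C_max: largest job first,
    each placed on the machine where it completes earliest."""
    loads = [0.0] * len(speeds)
    assignment = [[] for _ in speeds]
    for p in sorted(jobs, reverse=True):
        # machine on which this job would finish earliest
        i = min(range(len(speeds)), key=lambda m: loads[m] + p / speeds[m])
        loads[i] += p / speeds[i]
        assignment[i].append(p)
    return max(loads), assignment

cmax, assign = lpt_related(jobs=[7, 5, 4, 3, 2], speeds=[2.0, 1.0])
print(cmax, assign)  # makespan 7.0 for this instance
```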

20.
Borodin et al. (Algorithmica 37(4):295–326, 2003) gave a model of greedy-like algorithms for scheduling problems, and Angelopoulos and Borodin (Algorithmica 40(4):271–291, 2004) extended their work to facility location and set cover problems. We generalize their model to include other optimization problems and apply the generalized framework to graph problems. Our goal is to define an abstract model that captures the intrinsic power and limitations of greedy algorithms for various graph optimization problems, as Borodin et al. did for scheduling. We prove bounds on the approximation ratio achievable by such algorithms for basic graph problems such as shortest path, weighted vertex cover, Steiner tree, and independent set. For example, we show that, for the shortest path problem, no algorithm in the FIXED priority model can achieve any approximation ratio (even one dependent on the graph size), but the well-known Dijkstra's algorithm is an optimal ADAPTIVE priority algorithm. We also prove that the approximation ratio for weighted vertex cover achievable by ADAPTIVE priority algorithms is exactly 2; here, a new lower bound matches the known upper bounds (Johnson in J. Comput. Syst. Sci. 9(3):256–278, 1974). We give a number of other lower bounds for priority algorithms, as well as a new approximation algorithm for the minimum Steiner tree problem with weights in the interval [1,2]. S. Davis' research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332; views expressed are not endorsed by the NSF. R. Impagliazzo's research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332; views expressed are not endorsed by the NSF. Some work was done while at the Institute for Advanced Study, supported by the State of New Jersey.
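Dijkstra's algorithm, cited above as an optimal ADAPTIVE priority algorithm for shortest paths, fits the model because each iteration irrevocably commits to the unprocessed vertex of smallest tentative distance, an ordering that adapts to earlier decisions. A standard Python sketch:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; `graph` maps u -> [(v, weight), ...]."""
    dist = {source: 0.0}
    done = set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                      # irrevocable decision on u
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {'s': [('a', 1), ('b', 4)], 'a': [('b', 2)], 'b': []}
print(dijkstra(g, 's'))   # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```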
