Similar Documents (20 results)
1.
X-ray computed tomography (CT) plays an important role in cancer diagnosis and radiotherapy. However, the imaging dose delivered to healthy organs during CT scans is a serious clinical concern. Imaging dose in CT scans can be reduced by reducing the number of X-ray projections. In this paper, we consider 2D CT reconstruction using a very small number of projections. Several regularization-based reconstruction methods have been proposed in the literature for this task, such as total variation (TV) based reconstruction (Sidky and Pan in Phys. Med. Biol. 53:4777, 2008; Sidky et al. in J. X-Ray Sci. Technol. 14(2):119–139, 2006; Jia et al. in Med. Phys. 37:1757, 2010; Choi et al. in Med. Phys. 37:5113, 2010) and the balanced approach with wavelet frame based regularization (Jia et al. in Phys. Med. Biol. 56:3787–3807, 2011). Most existing methods need at least 40 projections to obtain a satisfactory reconstruction. To keep the radiation dose as low as possible while improving the quality of the reconstructed images, one needs to enhance the resolution of the projected image in the Radon domain without increasing the total number of projections. The goal of this paper is to propose a CT reconstruction model with wavelet frame based regularization and Radon domain inpainting. The proposed model simultaneously reconstructs a high-quality image and its corresponding high-resolution measurements in the Radon domain. In addition, we discovered that the isotropic wavelet frame regularization proposed in Cai et al. (Image restorations: total variation, wavelet frames and beyond, 2011, preprint) is superior to its anisotropic counterpart. The proposed model, as well as the other models presented in this paper, is solved efficiently by the split Bregman algorithm (Goldstein and Osher in SIAM J. Imaging Sci. 2(2):323–343, 2009; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009). Numerical simulations and comparisons are presented at the end.
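To give a feel for the split Bregman iterations used to solve such models, here is a minimal sketch applied to a 1-D toy problem, min_u ||Wu||_1 + (mu/2)||u - f||^2, with a forward-difference operator W standing in for the wavelet frame transform and the identity standing in for the projection operator; the function name and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(f, mu=20.0, lam=1.0, iters=100):
    """Toy split Bregman solver for min_u ||W u||_1 + mu/2 ||u - f||^2,
    with W a forward-difference operator standing in for a wavelet frame
    transform and the identity standing in for the projection operator."""
    n = f.size
    W = np.eye(n, k=1)[:n - 1] - np.eye(n)[:n - 1]   # (n-1) x n difference matrix
    A = mu * np.eye(n) + lam * W.T @ W               # normal matrix of the u-step
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * W.T @ (d - b))  # quadratic u-step
        Wu = W @ u
        d = shrink(Wu + b, 1.0 / lam)                # l1 shrinkage on coefficients
        b += Wu - d                                  # Bregman variable update
    return u

rng = np.random.default_rng(0)
f = np.repeat([0.0, 1.0, 0.3], 40) + 0.1 * rng.standard_normal(120)
u = split_bregman(f)   # recovers a nearly piecewise-constant signal
```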

2.
This paper analyzes the role of common data problems when identifying structural breaks in small samples. Most notably, we survey the small sample properties of the most commonly applied endogenous break tests, developed by Brown et al. (J R Stat Soc B 37:149–163, 1975) and Zeileis (Stat Pap 45(1):123–131, 2004), Nyblom (J Am Stat Assoc 84(405):223–230, 1989) and Hansen (J Policy Model 14(4):517–533, 1992), and Andrews et al. (J Econ 70(1):9–38, 1996). Power and size properties are derived using Monte Carlo simulations. We find that the Nyblom test is on par with the commonly used F-type tests in small samples in terms of power. Although the Nyblom test's power decreases if the structural break occurs close to the margin of the sample, it proves far more robust to nonnormal distributions of the error term, which are found to matter strongly in small samples despite being asymptotically irrelevant for all tests analyzed in this paper.
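As a concrete illustration of how such power and size properties are obtained by Monte Carlo simulation, the sketch below estimates rejection rates of a simple Chow-type F test with a known mid-sample break point. This is a deliberately simplified stand-in for the endogenous-break tests surveyed above; the sample size, error distributions, and replication counts are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def chow_F(y, x, split):
    """Chow F statistic for a known break point in the regression y = a + b*x."""
    def ssr(yy, xx):
        X = np.column_stack([np.ones_like(xx), xx])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        r = yy - X @ beta
        return r @ r
    n, k = y.size, 2
    restricted = ssr(y, x)
    unrestricted = ssr(y[:split], x[:split]) + ssr(y[split:], x[split:])
    return ((restricted - unrestricted) / k) / (unrestricted / (n - 2 * k))

def rejection_rate(n=30, reps=2000, shift=0.0, normal=True, seed=1):
    """Monte Carlo rejection frequency of the 5% test: shift=0 estimates size,
    shift>0 estimates power; normal=False draws t(3) errors to mimic the
    nonnormality scenarios. All settings are illustrative."""
    rng = np.random.default_rng(seed)
    crit = stats.f.ppf(0.95, 2, n - 4)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        e = rng.standard_normal(n) if normal else rng.standard_t(3, n)
        y = 1.0 + x + e
        y[n // 2:] += shift                  # structural break at mid-sample
        hits += chow_F(y, x, n // 2) > crit
    return hits / reps

print(rejection_rate(), rejection_rate(shift=1.0), rejection_rate(normal=False))
```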

3.
This work contrasts Giovanni Sartor's view of the inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores the possibility of applying Kemp's probabilistic model of theory formation to the mapping of legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extension, the normal-IRM (n-IRM) proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets in which legal concepts related to educational systems are defined by the Japanese and the Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner's inferential mechanisms.
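In its simplest form, the Bayesian Model of Generalization at the core of this approach reduces to a weighted vote over candidate category extensions, with the size principle favoring smaller hypotheses. The sketch below is a minimal illustration with a made-up nested "school type" ontology; the hypothesis sets, the uniform prior, and the function name are assumptions for illustration only, not the paper's ISCED data.

```python
def generalize(x, y, hypotheses):
    """Bayesian Model of Generalization (Tenenbaum & Griffiths): probability
    that y belongs to the same concept as the observed example x, with a
    uniform prior over hypotheses and the size-principle likelihood
    p(x | h) = 1/|h|. Hypotheses are sets of items."""
    w = {i: 1.0 / len(h) for i, h in enumerate(hypotheses) if x in h}
    return sum(v for i, v in w.items() if y in hypotheses[i]) / sum(w.values())

# made-up nested categories over school types (not the ISCED data of the paper)
hyps = [{"primary"},
        {"primary", "lower_secondary"},
        {"primary", "lower_secondary", "upper_secondary"}]
print(generalize("primary", "lower_secondary", hyps))   # ~0.45
```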

4.
In this paper, decision variables for the key-frame detection problem in a video are evaluated using statistical tools derived from the theory of design of experiments. The pixel-by-pixel intensity difference of consecutive video frames is used as the factor, or decision variable, for designing an experiment for key-frame detection. The determination of a key-frame is correlated with the different values of the factor. A novel concept of the meaningfulness of a video key-frame is also introduced to select the representative key-frame from a set of possible key-frames. The use of the concepts of design of experiments and the meaningfulness property to summarize a video is tested on a number of videos taken from the MUSCLE-VCD-2007 dataset. The performance of the proposed approach in detecting key-frames is found to be superior to competing approaches such as the PME-based method (Liu et al., IEEE Trans Circuits Syst Video Technol 13(10):1006–1013, 2003; Mukherjee et al., IEEE Trans Circuits Syst Video Technol 17(5):612–620, 2007; Panagiotakis et al., IEEE Trans Circuits Syst Video Technol 19(3):447–451, 2009).
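A minimal sketch of the frame-differencing decision variable follows; the z-score thresholding rule is an illustrative stand-in for the paper's design-of-experiments analysis and meaningfulness criterion, and the function name is hypothetical.

```python
import numpy as np

def keyframe_indices(frames, z=2.0):
    """Flag key-frames where the mean pixel-by-pixel intensity difference
    between consecutive frames is unusually large.

    frames: array of shape (T, H, W), grayscale. The z-score threshold is an
    illustrative decision rule, not the paper's statistical test."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    thresh = diffs.mean() + z * diffs.std()
    return np.where(diffs > thresh)[0] + 1   # index of the frame after the jump

# toy clip: 60 flat frames with an abrupt scene change at frame 30
clip = np.zeros((60, 32, 32))
clip[30:] = 1.0
print(keyframe_indices(clip))   # -> [30]
```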

5.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$, which we argue is the natural “limit” of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of “hardness” of clause-sets, as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008), we obtain a natural generalisation ${\mathcal{UC}}_k$, containing those clause-sets which are “unit-refutation complete of level k”, which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods for the investigation of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$. Finally we consider the problem of “irredundant” clause-sets in ${\mathcal{UC}}_k$. For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which brings together the good SAT-solving aspects from ${\mathcal{SLUR}}$ with the knowledge-representation aspects from ${\mathcal{UC}}$, and expands this combination via notions of “hardness”.
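Unit propagation is the engine behind both ${\mathcal{SLUR}}$ and ${\mathcal{UC}}$. The sketch below implements it and checks, for a single candidate implicate, the defining property of unit-refutation completeness (hardness at most 1); the clause encoding and the function names are illustrative assumptions.

```python
def unit_propagate(clauses, assignment=()):
    """Closure of `assignment` under unit propagation, or None if the empty
    clause is derived. Clauses are iterables of nonzero ints; a negative
    integer denotes a negated variable."""
    assign, changed = set(assignment), True
    while changed:
        changed = False
        for c in clauses:
            live = [l for l in c if -l not in assign]
            if not live:
                return None                      # empty clause derived
            if len(live) == 1 and live[0] not in assign:
                assign.add(live[0])
                changed = True
    return assign

def up_refutes(clauses, implicate):
    """Does adding the negation of `implicate` collapse the clause-set under
    unit propagation alone? A clause-set is in UC = UC_1 exactly when this
    holds for every implicate; here we test just one candidate."""
    return unit_propagate(clauses, [-l for l in implicate]) is None

F = [(1, 2), (-1, 2), (1, -2)]   # F entails the unit clause {x1}
print(up_refutes(F, (1,)))        # True: hardness-1 behaviour on this implicate
```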

6.
Combining the block transmission of Long and Liu (Phys Rev A 65:032302, 2002) and the double operations of Lin et al. (Opt Commun 282:4455, 2009), we propose a secure multiparty quantum secret sharing protocol with a collective eavesdropping-check character. In this protocol, only the boss needs to prepare Bell states and perform Bell state measurements, while all agents only perform local operations, which makes this protocol more feasible with current techniques. Incidentally, we show that the other half of the secret messages in the protocol of Lin et al. (Opt Commun 282:4455, 2009) may also be eavesdropped.

7.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights $w:E\to\mathbb{R}^+$, a set $T_1,T_2,\ldots,T_k$ of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-Tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving on the two different 4-approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving on the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).
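A basic ingredient behind such tree-cover approximations is a greedy splitting step that cuts a weighted tree into edge-disjoint connected bundles once enough weight accumulates. The sketch below shows only this splitting idea, not the 3- or 2.5-approximation algorithms of the paper; the function name and data layout are assumptions.

```python
def split_tree(adj, w, root=0, lam=1.0):
    """Greedily cut an edge-weighted tree into edge-disjoint connected bundles,
    detaching a bundle as soon as its accumulated weight reaches lam.
    adj: {u: [neighbours]}; w: {(u, v): weight}, keyed in both orientations."""
    pieces = []
    def dfs(u, parent):
        bundle, bw = [], 0.0           # edges gathered at u, and their weight
        for v in adj[u]:
            if v == parent:
                continue
            sub, sw = dfs(v, u)
            bundle += sub + [(u, v)]
            bw += sw + w[(u, v)]
            if bw >= lam:              # heavy enough: detach this subtree
                pieces.append(bundle)
                bundle, bw = [], 0.0
        return bundle, bw
    rest, _ = dfs(root, None)
    if rest:
        pieces.append(rest)            # leftover edges around the root
    return pieces

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
w = {e: 0.6 for e in [(0, 1), (1, 0), (0, 2), (2, 0), (1, 3), (3, 1)]}
print(split_tree(adj, w, lam=1.0))   # [[(1, 3), (0, 1)], [(0, 2)]]
```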

8.
9.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for the computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable to bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).

10.
We study two-stage robust variants of combinatorial optimization problems on undirected graphs, such as Steiner tree, Steiner forest, and uncapacitated facility location. Robust optimization problems, previously studied by Dhamdhere et al. (Proc. of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), pp. 367–378, 2005), Golovin et al. (Proc. of the 23rd Annual Symposium on Theoretical Aspects of Computer Science (STACS), 2006), and Feige et al. (Proc. of the 12th International Integer Programming and Combinatorial Optimization Conference, pp. 439–453, 2007), are two-stage planning problems in which the requirements are revealed after some decisions are taken in Stage 1. One then has to complete the solution, at a higher cost, to meet the given requirements. In the robust k-Steiner tree problem, for example, one buys some edges in Stage 1. Then k terminals are revealed in Stage 2 and one has to buy more edges, at a higher cost, to complete the Stage 1 solution to build a Steiner tree on these terminals. The objective is to minimize the total cost under the worst-case scenario. In this paper, we focus on the case of exponentially many scenarios given implicitly. A scenario consists of any subset of k terminals (for k-Steiner tree), any subset of k terminal pairs (for k-Steiner forest), or any subset of k clients (for facility location). Feige et al. (Proc. of the 12th International Integer Programming and Combinatorial Optimization Conference, pp. 439–453, 2007) give an LP-based general framework for approximation algorithms for a class of two-stage robust problems. Their framework cannot be used for network design problems like k-Steiner tree (as we elaborate later). It can be used for the robust facility location problem, but gives only a logarithmic approximation. We present the first constant-factor approximation algorithms for the robust k-Steiner tree (with an exponential number of scenarios) and robust uncapacitated facility location problems. Our algorithms are combinatorial and are based on guessing the optimum cost and clustering to aggregate nearby vertices. For the robust k-Steiner forest problem on trees, with a uniform multiplicative increase factor for Stage 2 (also known as inflation), we present a constant-factor approximation. We show the APX-hardness of the robust min-cut problem (even with singleton-set scenarios), resolving an open question of Dhamdhere et al. (Proc. of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), pp. 367–378, 2005) and Golovin et al. (Proc. of the 23rd Annual Symposium on Theoretical Aspects of Computer Science (STACS), 2006).

11.
We extend the result of Zhang et al. (J Fuzzy Math 14:53, 2006), who discussed finite fuzzy relation equations with max–min and max–prod composition. In this article, the $\text{max-}*$ composition is used for a wide family of operations $*$. In particular, the families of solutions of two relation equations are compared.
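For readers unfamiliar with the notation, a max-$*$ composition of finite fuzzy relations is a matrix product with addition replaced by max and multiplication replaced by the operation $*$. A small numpy sketch with illustrative values:

```python
import numpy as np

def max_star_compose(R, S, star=np.minimum):
    """Max-* composition of fuzzy relations on finite universes:
    (R o S)[i, k] = max_j R[i, j] * S[j, k], where * is the chosen
    operation (min gives max-min, multiply gives max-prod)."""
    return np.max(star(R[:, :, None], S[None, :, :]), axis=1)

R = np.array([[0.2, 0.8], [1.0, 0.4]])
S = np.array([[0.5, 0.9], [0.7, 0.3]])
print(max_star_compose(R, S))                    # max-min composition
print(max_star_compose(R, S, star=np.multiply))  # max-prod composition
```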

12.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past penalize the $\ell_1$ norm of the wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the $\ell_0$ “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used $\ell_1$ minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for $\ell_0$ minimization, based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images of even higher quality. This study reaffirms the feasibility of using the $\ell_0$ “norm” for image restoration problems.
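The essential difference between the $\ell_1$ and $\ell_0$ models is the thresholding rule: soft shrinkage is replaced by hard thresholding. The sketch below shows only this $\ell_0$ ingredient on a 1-D toy problem, not the MDAL method itself; the difference operator, function names, and parameters are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, t):
    """Prox of (t**2/2)*||.||_0: zero out entries with magnitude <= t."""
    return np.where(np.abs(x) > t, x, 0.0)

def l0_frame_denoise(f, lam=0.05, mu=1.0, iters=50):
    """Alternating scheme for min_u lam*||W u||_0 + 1/2 ||u - f||^2 with W a
    forward-difference 'frame'. This mimics only the l0 ingredient (hard
    instead of soft thresholding); parameters are illustrative."""
    n = f.size
    W = np.eye(n, k=1)[:n - 1] - np.eye(n)[:n - 1]
    A = np.eye(n) + mu * W.T @ W
    d, u = np.zeros(n - 1), f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * W.T @ d)          # quadratic u-step
        d = hard_threshold(W @ u, np.sqrt(2 * lam / mu))  # l0 prox, not soft shrink
    return u

rng = np.random.default_rng(0)
f = np.repeat([0.0, 1.0], 50) + 0.05 * rng.standard_normal(100)
u = l0_frame_denoise(f)   # jump preserved, flat parts cleaned
```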

13.
The last decade has seen an explosion in the number of people learning English as a second language (ESL). In China alone, this number is estimated at over 300 million (Yang in Engl Today 22, 2006). Even in predominantly English-speaking countries, the proportion of non-native speakers can be very substantial. For example, the US National Center for Educational Statistics reported that nearly 10 % of the students in the US public school population speak a language other than English and have limited English proficiency (National Center for Educational Statistics (NCES) in Public school student counts, staff, and graduate counts by state: school year 2000–2001, 2002). As a result, the last few years have seen a rapid increase in the development of NLP tools to detect and correct grammatical errors so that appropriate feedback can be given to ESL writers, a large and growing segment of the world's population. As a byproduct of this surge in interest, there have been many NLP research papers on the topic, a Synthesis Series book (Leacock et al. in Automated grammatical error detection for language learners. Synthesis lectures on human language technologies. Morgan Claypool, Waterloo 2010), a recurring workshop (Tetreault et al. in Proceedings of the NAACL workshop on innovative use of NLP for building educational applications (BEA), 2012), and a shared task competition (Dale et al. in Proceedings of the seventh workshop on building educational applications using NLP (BEA), pp 54–62, 2012; Dale and Kilgarriff in Proceedings of the European workshop on natural language generation (ENLG), pp 242–249, 2011). Despite this growing body of work, several issues affecting the annotation for and evaluation of ESL error detection systems have received little attention. In this paper, we describe these issues in detail and present our research on alleviating their effects.

14.
Weighted essentially non-oscillatory (WENO) finite difference schemes, developed by Liu et al. (J Comput Phys 115(1):200–212, 1994) and improved by Jiang and Shu (J Comput Phys 126(1):202–228, 1996), are among the most popular methods for approximating the solutions of hyperbolic equations. However, these schemes fail to provide maximal order accuracy near smooth extrema, where the first derivative of the solution vanishes. Several authors have addressed this problem with different weight designs. In this paper we focus on the weights proposed by Yamaleev and Carpenter (J Comput Phys 228:4248–4272, 2009). They propose new weights to provide faster weight convergence than those presented in Borges et al. (J Comput Phys 227:3191–3211, 2008) and deduce constraints on the weight parameters that guarantee that the WENO scheme has maximal order for sufficiently smooth solutions with an arbitrary number of vanishing derivatives. We analyze the scheme with the weights proposed in Yamaleev and Carpenter (J Comput Phys 228:4248–4272, 2009) and prove that near discontinuities it achieves worse orders than classical WENO schemes. In order to solve these accuracy problems, we define new weights, based on those proposed in Yamaleev and Carpenter (J Comput Phys 228:4248–4272, 2009), and derive constraints on the weight parameters that guarantee maximal order accuracy for the resulting schemes.
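For concreteness, here is a sketch of fifth-order WENO weights built with a Yamaleev–Carpenter-type global smoothness indicator τ. The value of eps is an illustrative assumption; the cited papers tie such parameters to the mesh size, which is precisely where the accuracy constraints discussed above arise.

```python
import numpy as np

def weno5_weights(f, eps=1e-40):
    """Nonlinear weights for fifth-order WENO at x_{i+1/2} from the stencil
    f = (f_{i-2}, ..., f_{i+2}), using a Yamaleev-Carpenter-type global
    indicator tau = (f_{i-2} - 4f_{i-1} + 6f_i - 4f_{i+1} + f_{i+2})^2."""
    d = np.array([0.1, 0.6, 0.3])              # ideal (linear) weights
    b = np.array([                              # Jiang-Shu smoothness indicators
        13/12*(f[0]-2*f[1]+f[2])**2 + 0.25*(f[0]-4*f[1]+3*f[2])**2,
        13/12*(f[1]-2*f[2]+f[3])**2 + 0.25*(f[1]-f[3])**2,
        13/12*(f[2]-2*f[3]+f[4])**2 + 0.25*(3*f[2]-4*f[3]+f[4])**2,
    ])
    tau = (f[0] - 4*f[1] + 6*f[2] - 4*f[3] + f[4])**2
    alpha = d * (1.0 + tau / (b + eps))
    return alpha / alpha.sum()

print(weno5_weights(np.array([1., 2., 3., 4., 5.])))   # smooth data -> exactly d
print(weno5_weights(np.array([0., 0., 0., 1., 1.])))   # jump -> smooth stencil wins
```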

15.
We present a high-order accurate scheme for the reinitialization equation of Sussman et al. (J. Comput. Phys. 114:146–159, 1994) that guarantees accurate computation of the interface's curvatures in the context of level set methods. This scheme is an extension of the work of Russo and Smereka (J. Comput. Phys. 163:51–67, 2000). We present numerical results in two and three spatial dimensions to demonstrate fourth-order accuracy for the reinitialized level set function, third-order accuracy for the normals, and second-order accuracy for the interface's mean curvature in the $L^1$- and $L^\infty$-norms. We also exploit the work of Min and Gibou (UCLA CAM Report 06-22, 2006) to obtain a second-order accurate scheme for the computation of the mean curvature on non-graded adaptive grids.
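The reinitialization equation in question is φ_t + sign(φ0)(|∇φ| − 1) = 0, which drives φ toward a signed distance function while preserving its zero level set. The sketch below is a first-order 1-D version with Godunov upwinding, far simpler than the paper's high-order scheme with the Russo–Smereka subcell fix, and meant only to show the mechanics; step counts and the smoothed sign are illustrative choices.

```python
import numpy as np

def reinitialize_1d(phi0, dx, steps=200, dt=None):
    """First-order upwind solver for phi_t + sign(phi0)(|phi_x| - 1) = 0
    (Sussman et al.), a 1-D toy version of reinitialization."""
    dt = dt or 0.5 * dx                           # CFL-limited time step
    s = phi0 / np.sqrt(phi0**2 + dx**2)           # smoothed sign function
    phi = phi0.copy()
    for _ in range(steps):
        dm = np.diff(phi, prepend=phi[0]) / dx    # backward differences
        dp = np.diff(phi, append=phi[-1]) / dx    # forward differences
        # Godunov upwind gradient magnitude, one branch per sign of s
        gp = np.sqrt(np.maximum(np.maximum(dm, 0.0)**2, np.minimum(dp, 0.0)**2))
        gm = np.sqrt(np.maximum(np.minimum(dm, 0.0)**2, np.maximum(dp, 0.0)**2))
        grad = np.where(s >= 0, gp, gm)
        phi = phi - dt * s * (grad - 1.0)
    return phi

x = np.linspace(-1.0, 1.0, 101)
phi = reinitialize_1d(x**3, dx=x[1] - x[0])   # zero set kept, |phi'| driven to 1
```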

16.
This work addresses the problem of fault detection and diagnosis (FDD) for a quad-rotor mini air vehicle (MAV). Actuator faults are considered in this paper. The basic idea behind the proposed method is to estimate the fault signals using the theory of extended state observers. To estimate the faults, a polynomial observer (Aguilar et al. 2011; Mata-Machuca et al., Commun Nonlinear Sci Numer Simul 15(12):4114–4130, 2010; BioSystems 100(1):65–69, 2010) is presented that uses the available measurements and known inputs of the system. In order to investigate the diagnosability properties of the system, a differential algebra approach is proposed (Cruz-Victoria et al., J Frankl Inst 345(2):102–118, 2008; Martinez-Guerra and Diop, IEE P-Contr Theor Ap 151(1):130–135, 2004). The effectiveness of the methodology is illustrated by means of numerical simulations.
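To illustrate the extended-state-observer idea, the sketch below appends an additive actuator fault as an extra state of a toy double integrator and reconstructs it from position measurements alone. The plant, fault profile, and observer gains are all illustrative assumptions, not the quad-rotor model or the polynomial observer of the paper.

```python
import numpy as np

def eso_fault_estimate(T=10.0, dt=1e-3):
    """Extended state observer on a double integrator: the additive actuator
    fault is modeled as a third, slowly varying state and estimated online.
    Gains L place the observer poles at (s + 10)^3; values are illustrative."""
    A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
    B = np.array([0., 1., 0.])
    L = np.array([30., 300., 1000.])
    x = np.zeros(2)                       # true plant: [position, velocity]
    xh = np.zeros(3)                      # observer:   [pos, vel, fault]
    for k in range(int(T / dt)):
        t = k * dt
        u = np.sin(t)
        f = 0.5 if t > 5.0 else 0.0       # abrupt actuator fault at t = 5 s
        x += dt * np.array([x[1], u + f])                    # Euler plant step
        xh += dt * (A @ xh + B * u + L * (x[0] - xh[0]))     # observer step
    return xh[2]                          # converges near the injected 0.5

print(eso_fault_estimate())
```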

17.
The Parameterized Complexity of Unique Coverage and Its Variants
In this paper we study the parameterized complexity of the Unique Coverage problem, a variant of the classic Set Cover problem. This problem admits several parameterizations, and we show that all of them, except the standard parameterization and a generalization of it, are unlikely to be fixed-parameter tractable. We use results from extremal combinatorics to obtain the best-known kernel for Unique Coverage, and the well-known color-coding technique of Alon et al. (J. ACM 42(4), 844–856, 1995) to show that a weighted version of this problem is fixed-parameter tractable. Our application of color-coding uses an interesting variation of s-perfect hash families called (k,s)-hash families, which were studied by Alon et al. (J. Comb. Theory Ser. A 104(1), 207–215, 2003) in the context of a class of codes called parent identifying codes (Barg et al. in SIAM J. Discrete Math. 14(3), 423–431, 2001). To the best of our knowledge, this is the first application of (k,s)-hash families outside the domain of coding theory. We prove the existence of such families of size smaller than the best-known s-perfect hash families using the probabilistic method (Alon and Spencer in The Probabilistic Method, Wiley, New York, 2000). The explicit construction of such families of the size promised by the probabilistic method remains open.
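The color-coding technique itself is easiest to see on the classic problem it was introduced for, finding a simple path on k vertices. The sketch below shows that use (random k-colorings plus dynamic programming over color subsets), not the weighted Unique Coverage application of the paper; the trial count and function name are illustrative assumptions.

```python
import random

def has_colorful_k_path(adj, k, trials=300, seed=0):
    """Color-coding (Alon et al.): randomly k-color the vertices, then search
    for a path using each color exactly once via dynamic programming over
    color subsets; a fixed k-path survives a random coloring with probability
    at least k!/k^k, so repeating the trial boosts the success probability."""
    rng = random.Random(seed)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in adj}
        # reach[v]: color sets of colorful paths ending at v
        reach = {v: {frozenset([color[v]])} for v in adj}
        for _ in range(k - 1):
            new = {v: set(s) for v, s in reach.items()}
            for u in adj:
                for v in adj[u]:
                    for S in reach[u]:
                        if color[v] not in S:      # extend without repeating a color
                            new[v].add(S | {color[v]})
            reach = new
        if any(len(S) == k for sets in reach.values() for S in sets):
            return True
    return False

path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(has_colorful_k_path(path, 4), has_colorful_k_path(path, 6))  # True False
```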

18.
We study the exact controllability, by a reduced number of controls, of coupled cascade systems of PDEs and the existence of exact insensitizing controls for the scalar wave equation. We give a necessary and sufficient condition for the observability of abstract coupled cascade hyperbolic systems by a single observation, the observation operator being either bounded or unbounded. Our proof extends the two-level energy method introduced in Alabau-Boussouira (SIAM J Control Optim 42:871–906, 2003) and Alabau-Boussouira and Léautaud (J Math Pures Appl 99:544–576, 2013) for symmetric coupled systems to cascade systems, which are examples of non-symmetric coupled systems. In particular, we prove the observability of two coupled wave equations in cascade if the observation and coupling regions both satisfy the Geometric Control Condition (GCC) of Bardos et al. (SIAM J Control Optim 30:1024–1065, 1992). By duality, this solves the exact controllability, by a single control, of 2-coupled abstract cascade hyperbolic systems. Using transmutation, we give null-controllability results for the multidimensional heat and Schrödinger 2-coupled cascade systems under GCC and for any positive time. By our method, we can treat cases where the control and coupling coefficients have disjoint supports, partially solving an open question raised by de Teresa (CPDE 25:39–72, 2000). Moreover, we answer the question of the existence of exact insensitizing locally distributed as well as boundary controls for scalar multidimensional wave equations, raised by Lions (Actas del Congreso de Ecuaciones Diferenciales y Aplicaciones (CEDYA), Universidad de Málaga, pp 43–54, 1989) and later by Dáger (SIAM J Control Optim 45:1758–1768, 2006) and Tebou (C R Acad Sci Paris 346(Sér I):407–412, 2008).
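As a concrete instance of the systems covered by these results, a prototypical 2-coupled wave cascade with a single locally distributed control $v$ acting on the first component only (written here in our illustrative notation, not necessarily the paper's) reads

$$\begin{cases} \partial_t^2 u_1 - \Delta u_1 = b(x)\,v & \text{in } \Omega \times (0,T),\\ \partial_t^2 u_2 - \Delta u_2 + c(x)\,u_1 = 0 & \text{in } \Omega \times (0,T), \end{cases}$$

with homogeneous boundary and initial data. The second component is reached only indirectly through the coupling coefficient $c$, which is why both the control region $\operatorname{supp} b$ and the coupling region $\operatorname{supp} c$ are required to satisfy GCC.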

19.
In this paper, a compressed membership problem for finite automata, both deterministic (DFAs) and non-deterministic (NFAs), with compressed transition labels is studied. The compression is represented by straight-line programs (SLPs), i.e. context-free grammars generating exactly one string. A novel technique for dealing with SLPs is employed: the SLPs are recompressed, so that substrings of the input word are encoded in the SLPs labelling the transitions of the NFA (DFA) in the same way as in the SLP representing the input text. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. Furthermore, in order to reflect the recompression in the NFA, we need to modify it only a little; in particular, its size stays polynomial in the input size. Using this technique it is shown that compressed membership for NFAs with compressed labels is in NP, thus confirming the conjecture of Plandowski and Rytter (Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999) and extending the partial result of Lohrey and Mathissen (CSR, LNCS, vol. 6651, pp. 275–288, Springer, Berlin, 2011); as this problem is known to be NP-hard (Plandowski and Rytter, Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999), we settle its exact computational complexity. Moreover, the same technique applied to compressed membership for DFAs with compressed labels yields that this problem is in P; it is known to be P-hard (Markey and Schnoebelen, Inf. Process. Lett. 90(1):3–6, 2004; Beaudry et al., SIAM J. Comput. 26(1):138–152, 1997).
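To make the SLP machinery concrete, the sketch below computes the expansion lengths of a straight-line program and retrieves a single character of the encoded string in time proportional to the grammar depth, the kind of random access on which compressed-membership algorithms rely; the rule encoding and function names are illustrative assumptions.

```python
def slp_lengths(rules):
    """Length of the string produced by each nonterminal, computed without
    decompressing. rules: nonterminal -> terminal character or pair of
    nonterminals, listed in bottom-up order."""
    n = {}
    for name, rhs in rules.items():
        n[name] = 1 if isinstance(rhs, str) else n[rhs[0]] + n[rhs[1]]
    return n

def slp_char(rules, lengths, sym, i):
    """i-th character of the expansion of `sym`, found by descending the
    grammar; runs in time linear in the SLP depth, not the string length."""
    rhs = rules[sym]
    while not isinstance(rhs, str):
        left, right = rhs
        if i < lengths[left]:
            sym = left
        else:
            i -= lengths[left]
            sym = right
        rhs = rules[sym]
    return rhs

rules = {"X1": "b", "X2": "a", "X3": ("X2", "X1"),
         "X4": ("X3", "X2"), "X5": ("X4", "X3")}   # X5 expands to "abaab"
ln = slp_lengths(rules)
print(ln["X5"], "".join(slp_char(rules, ln, "X5", i) for i in range(ln["X5"])))
```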

20.
Danvy's functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe “printf” function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe “scanf” function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
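The control structure of Danvy's solution can be mimicked even in an untyped language: each format directive is written in continuation-passing style and decides whether the partially built result expects another argument. The Python sketch below reproduces only this structure; the static type safety that Hindley-Milner inference provides for the ML original is, of course, lost, and all names are illustrative.

```python
def lit(s):
    """A literal directive: appends s and passes control on."""
    return lambda k, acc: k(acc + s)

def num(k, acc):
    """A %d-style directive: the result now expects one integer argument."""
    return lambda n: k(acc + str(n))

def sprintf(*fmt):
    """Compose directives in continuation-passing style; the returned value
    is either a finished string or a function awaiting more arguments."""
    def go(i, acc):
        if i == len(fmt):
            return acc
        return fmt[i](lambda a: go(i + 1, a), acc)
    return go(0, "")

print(sprintf(lit("x = "), num, lit(", y = "), num)(3)(4))  # "x = 3, y = 4"
```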
