Similar Documents (20 results)
1.
The influence maximization problem seeks a set of seed nodes in a social network that, under a given propagation model, maximizes the final spread of influence. The greedy algorithm of Kempe and Kleinberg achieves a good influence spread, but its high complexity makes it unsuitable for large social networks. Chen, Yuan, et al. proposed a heuristic that constructs local directed acyclic graphs under the linear threshold (LT) model, but the LT model considers only the direct influence of neighboring nodes and ignores the indirect influence between nodes. We therefore extend the LT model with the indirect influence between nodes to obtain the LT+ influence model, and solve influence maximization under LT+ with a heuristic that constructs local directed acyclic graphs, called the LT+DAG algorithm. Comparative experiments on real datasets show that LT+DAG achieves a larger influence spread and good scalability.
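As a point of reference for the entry above, here is a minimal sketch of the Kempe–Kleinberg greedy strategy under the plain LT model (not the LT+DAG algorithm described above); the directed graph, the edge-weight attribute `w`, and the Monte-Carlo trial count are illustrative assumptions.

```python
import random
import networkx as nx

def lt_spread(G: nx.DiGraph, seeds, trials=200):
    """Monte-Carlo estimate of expected spread under the linear threshold
    model; edge attribute 'w' holds influence weights (in-weights per
    node should sum to at most 1)."""
    total = 0
    for _ in range(trials):
        theta = {v: random.random() for v in G}        # random thresholds
        active, frontier = set(seeds), set(seeds)
        while frontier:
            frontier = {
                v for v in G if v not in active
                and sum(G[u][v]["w"] for u in G.predecessors(v) if u in active)
                    >= theta[v]
            }
            active |= frontier
        total += len(active)
    return total / trials

def greedy_im(G, k, trials=200):
    """Kempe-Kleinberg greedy: k times, add the node whose estimated
    marginal gain in spread is largest."""
    seeds = []
    for _ in range(k):
        base = lt_spread(G, seeds, trials)
        gain = lambda v: lt_spread(G, seeds + [v], trials) - base
        seeds.append(max((v for v in G if v not in seeds), key=gain))
    return seeds
```

The quadratic number of spread estimations is exactly the cost that makes this baseline impractical at scale, which is what motivates the local-DAG heuristics discussed in the abstract.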

2.
Wireless sensor networks (WSNs), one of the commercial wireless mesh networks (WMNs), are envisioned to provide an effective solution for sensor-based AmI (Ambient Intelligence) systems and applications. To enable seamless communication between AmI sensor networks and the most popular TCP/IP networks, the best solution model is to run TCP/IP directly on WSNs (Mulligan et al. 2009; Hui and Culler 2008; Han and Mam 2007; Kim et al. 2007; Xiaohua et al. 2004; Dunkels et al. 2004; Dunkels et al. 2004; Dunkels 2001; Dunkels et al. 2004). In this case, an IP assignment method is required to assign each sensor node a unique IP address. SIPA (Dunkels et al. 2004) is the best-known IP assignment method; it uses the spatial relations and locations of sensor nodes to assign their IP addresses. It has been applied in Contiki (Dunkels et al. 2004), a famous WSN operating system, to support the 6LowPAN protocol. In Chang et al. (2009), we proposed the SLIPA (Scan-Line IP Assignment) algorithm to improve the assignment success rate (ASR) obtained by SIPA. SLIPA achieves a good ASR when sensor nodes are uniformly distributed; for other deployment distributions, however, the improvement is limited. This paper proposes a new spatial IP assignment method, called SLIPA-Q (SLIPA with equal-quantity partition), to improve SLIPA. Experiments show that, testing the proposed method 1,000 times with 1,000 randomly deployed sensor nodes, the average ASR obtained by SLIPA-Q is over twice that obtained by SLIPA. Under the same 88% ASR, the average numbers of sensor nodes that can be successfully assigned by SLIPA-Q, SLIPA, and SIPA are 950, 850, and 135, respectively. Compared to previous spatial IP assignment methods, SLIPA-Q achieves dramatic improvements in ASR when assigning IP addresses to a large set of sensor nodes.
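The abstract does not reproduce the algorithm itself. Purely as a hedged illustration of what an equal-quantity scan-line partition can look like (the strip count, subnet layout, and node format below are invented for the example and are not the authors' SLIPA-Q):

```python
def equal_quantity_scanline_ips(nodes, strips=4, prefix="10.0"):
    """Toy equal-quantity scan-line IP assignment: sweep nodes along x,
    cut them into strips holding (nearly) equal numbers of nodes, then
    number each strip bottom-up.  nodes: list of (x, y) positions;
    returns {(x, y): 'a.b.strip.host'}."""
    order = sorted(nodes, key=lambda p: p[0])        # scan line along x
    per_strip = -(-len(order) // strips)             # ceiling division
    assignment = {}
    for s in range(strips):
        strip = order[s * per_strip:(s + 1) * per_strip]
        for host, node in enumerate(sorted(strip, key=lambda p: p[1])):
            assignment[node] = f"{prefix}.{s}.{host}"
    return assignment
```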

3.
We revisit from a fairness point of view the problem of online load balancing in the restricted assignment model and the 1-∞ model. We consider both a job-centric and a machine-centric view of fairness, as proposed by Goel et al. (In: Symposium on discrete algorithms, pp. 384–390, 2005). These notions are equivalent to the approximate notion of prefix competitiveness proposed by Kleinberg et al. (In: Proceedings of the 40th annual symposium on foundations of computer science, p. 568, 2001), as well as to the notion of approximate majorization, and they generalize the well-studied notion of max-min fairness. We resolve a question posed by Goel et al. (2005), proving that the greedy strategy is globally O(log m)-fair, where m denotes the number of machines. This result improves upon the analysis of Goel et al. (2005), who showed that the greedy strategy is globally O(log n)-fair, where n is the number of jobs. Typically n ≫ m, so our improvement is significant. Our proof matches the known lower bound for the problem with respect to the measure of global fairness. The improved bound is obtained by analyzing, in a more accurate way, the more general restricted assignment model studied previously in Azar et al. (J. Algorithms 18:221–237, 1995). We provide an alternative bound which is never worse than the bounds of Azar et al. (1995) and is strictly better in many cases. The bound we prove is, in fact, much more general: it bounds the load on any prefix of the most loaded machines. As a corollary of this more general bound, we find that the greedy algorithm results in an assignment that is globally O(log m)-balanced. The last result generalizes the previous result of Goel et al. (2005), who proved that the greedy algorithm yields an assignment that is globally O(log m)-balanced for the 1-∞ model.
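The greedy strategy analyzed above is the natural online rule: send each arriving job to its least loaded admissible machine. A minimal sketch (the job encoding and tie-breaking are assumptions):

```python
def greedy_restricted_assignment(jobs, machines):
    """Online greedy for the restricted assignment model: each job is a
    (weight, admissible_machines) pair and goes to the currently least
    loaded machine it is allowed on.  Returns (assignment, loads)."""
    load = {m: 0.0 for m in machines}
    assignment = {}
    for j, (w, allowed) in enumerate(jobs):
        m = min(allowed, key=lambda x: load[x])      # least loaded admissible
        load[m] += w
        assignment[j] = m
    return assignment, load
```

In the 1-∞ model a job's weight is 1 on its admissible machines and infinite elsewhere, so the same loop applies with unit weights.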

4.
倪静, 秦斌. 《计算机应用研究》 (Application Research of Computers), 2021, 38(8): 2340–2343, 2375.
To address the limited accuracy of heuristic methods and the excessive running time of greedy methods in influence maximization, this paper proposes an influence maximization algorithm based on multi-attribute decision making. First, node importance indicators are selected from the perspectives of a node's influence spread, the influence overlap between nodes, and node trust. Then, an entropy-weighted TOPSIS model for evaluating the importance of social network nodes is built; the model selects the node with the widest influence range, the smallest overlap with the current seed set, and the highest trust. Finally, the algorithm is constructed and its performance is verified experimentally. The results show that, compared with traditional influence maximization algorithms, the proposed algorithm achieves a good trade-off between spread and time efficiency.
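Entropy-weighted TOPSIS itself is a standard multi-attribute decision procedure; a compact sketch follows (the indicator matrix and its benefit-type orientation are assumptions, and this is not the paper's full seed-selection loop):

```python
import numpy as np

def entropy_topsis(X):
    """Rank alternatives by entropy-weighted TOPSIS.  X: (m, n) matrix of
    nonnegative benefit-type scores (rows = nodes, cols = criteria).
    Returns each row's relative closeness to the ideal solution."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                              # column proportions
    P_safe = np.where(P > 0, P, 1.0)                   # log(1) = 0 for zeros
    E = -(P * np.log(P_safe)).sum(axis=0) / np.log(m)  # per-criterion entropy
    w = (1 - E) / (1 - E).sum()                        # entropy weights
    V = w * X / np.linalg.norm(X, axis=0)              # weighted, normalized
    best, worst = V.max(axis=0), V.min(axis=0)         # ideal / anti-ideal
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                # relative closeness
```

In the paper's setting the three criteria would be influence spread, overlap with the current seed set (inverted, since smaller is better), and trust; the highest-scoring node is added to the seeds and the indicators are recomputed.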

5.
Numerous sophisticated local algorithms have been suggested in the literature for various fundamental problems. Notable examples are the MIS and $(\Delta+1)$-coloring algorithms by Barenboim and Elkin (Distrib Comput 22(5–6):363–379, 2010), by Kuhn (2009), and by Panconesi and Srinivasan (J Algorithms 20(2):356–374, 1996), as well as the $O(\Delta^2)$-coloring algorithm by Linial (J Comput 21:193, 1992). Unfortunately, most known local algorithms (including, in particular, the aforementioned ones) are non-uniform; that is, local algorithms generally use good estimations of one or more global parameters of the network, e.g., the maximum degree $\Delta$ or the number of nodes $n$. This paper provides a method for transforming a non-uniform local algorithm into a uniform one, and the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all state-of-the-art non-uniform algorithms for MIS and Maximal Matching, as well as to many results concerning the coloring problem (in particular, it applies to all the aforementioned algorithms). To obtain our transformations we introduce a new distributed tool called pruning algorithms, which we believe may be of independent interest.

6.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, there is a solution based on LU decomposition that reduces the computing time by at least an order of magnitude.
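To make the baseline concrete, here is a hedged sketch of the fixed-point iteration that the LU-based direct solve replaces; the smoothing parameter `p`, the boundary handling, and the iteration count are illustrative, not Ebner's exact settings.

```python
import numpy as np

def local_space_average_color(img, p=0.01, iters=500):
    """Iterative (Jacobi-style) computation of local space average color,
    in the spirit of Ebner's update a <- p*c + (1-p)*mean(neighbors).
    img: float array of shape (H, W, 3) with values in [0, 1]."""
    a = img.copy()
    for _ in range(iters):
        # 4-neighbour mean with edge replication instead of wrap-around
        up    = np.roll(a,  1, axis=0); up[0]       = a[0]
        down  = np.roll(a, -1, axis=0); down[-1]    = a[-1]
        left  = np.roll(a,  1, axis=1); left[:, 0]  = a[:, 0]
        right = np.roll(a, -1, axis=1); right[:, -1] = a[:, -1]
        a = p * img + (1 - p) * (up + down + left + right) / 4.0
    return a
```

Framing this fixed point as a sparse linear system is what allows the direct LU solve of the entry to replace the slow iteration.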

7.
In this paper, we present a memetic algorithm (MA) for solving the uncapacitated single allocation hub location problem (USAHLP). Two efficient local search heuristics are designed and implemented in the frame of an evolutionary algorithm in order to improve both the location and the allocation part of the problem. Computational experiments, conducted on standard CAB/AP hub data sets (Beasley in J Global Optim 8:429–433, 1996) and a modified AP data set with reduced fixed costs (Silva and Cunha in Comput Oper Res 36:3152–3165, 2009), show that the MA approach is superior to existing heuristic approaches for the USAHLP. For several large-scale AP instances with up to 200 nodes, the MA improved on the best-known solutions in the literature. Numerical results on instances with 300 and 400 nodes introduced in Silva and Cunha (2009) show significant improvements in both solution quality and CPU time. The robustness of the MA was additionally tested on a challenging set of newly generated large-scale instances with 520–900 nodes; to the best of our knowledge, these are the largest USAHLP instances solved in the literature to date. In addition, we report for the first time optimal solutions for 30 AP and modified AP instances.

8.
The TreeRank algorithm was recently proposed in [1] and [2] as a scoring-based method built on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic curve through a one-step optimization procedure called LeafRank. One aim of this paper is an in-depth analysis of the empirical performance of variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets are provided. Further experiments using resampling and randomization, in the spirit of bagging and random forests [3, 4], are developed, and we show how they increase both stability and accuracy in bipartite ranking. Moreover, an empirical comparison with other efficient scoring algorithms such as RankBoost and RankSVM is presented on UCI benchmark data sets.

9.
Influence maximization in social networks asks, under a given propagation model and for a given seed-set size, for the node set with the largest influence spread. This is a combinatorial optimization problem that Kempe et al. proved NP-hard, and its study is of great significance both in theory and in practical applications. This paper proposes a new influence maximization algorithm based on overlapping community detection (the K-clique Heuristic algorithm). Its guiding idea is that, in real social networks, nodes spanning multiple social circles have broader reach: the more circles a node crosses, the wider its spread and the greater its influence. The proposed algorithm produces results comparable to existing representative algorithms while offering better practical applicability and interpretability, providing a new perspective and method for this challenging line of research.
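A hedged sketch of the underlying intuition (not the paper's exact K-clique Heuristic): detect overlapping k-clique communities with networkx and prefer nodes that belong to the most circles.

```python
from networkx.algorithms.community import k_clique_communities

def cross_community_seeds(G, k_seeds, clique_size=3):
    """Pick seed nodes that span the most overlapping k-clique communities,
    echoing the intuition that nodes bridging several social circles
    spread influence furthest.  A toy sketch, not the paper's algorithm."""
    communities = list(k_clique_communities(G, clique_size))
    membership = {v: 0 for v in G}
    for com in communities:
        for v in com:
            membership[v] += 1            # number of circles v belongs to
    # tie-break by degree so equal-membership nodes rank sensibly
    ranked = sorted(G, key=lambda v: (membership[v], G.degree(v)), reverse=True)
    return ranked[:k_seeds]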

10.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most wavelet frame based models designed in the past penalize the $\ell_1$ norm of the wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the $\ell_0$ “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used $\ell_1$ minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for $\ell_0$ minimization based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method of Zhang et al. (2011), but can also generate recovered images of even higher quality. This study reaffirms the feasibility of using the $\ell_0$ “norm” for image restoration problems.
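The practical difference between the $\ell_1$ and $\ell_0$ penalties shows up in the per-iteration shrinkage applied to the frame coefficients: the $\ell_1$ proximal map soft-thresholds, while the $\ell_0$ map hard-thresholds. A minimal sketch of the two operators (the threshold parameter is illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t*||x||_1: shrink every coefficient toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    """Proximal-type map for the l0 "norm": keep coefficients larger than
    the threshold, zero out the rest (no shrinkage of survivors)."""
    return np.where(np.abs(x) > t, x, 0.0)
```

Hard thresholding keeps surviving coefficients unbiased, which is one source of the sharper reconstructions reported for $\ell_0$ models.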

11.
Danvy's functional unparsing problem (Danvy in J. Funct. Program. 8(6):621–625, 1998) is to implement a type-safe “printf” function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe “scanf” function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1):164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1):3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL '85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
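The control-flow trick behind functional unparsing can be sketched even in a dynamically typed language (so the Hindley-Milner static guarantee discussed above is necessarily lost here): format directives are functions that either emit text or suspend to consume the next argument.

```python
def lit(s):
    """Literal format fragment: emit s and continue."""
    return lambda k, acc: k(acc + s)

def _int(k, acc):
    return lambda n: k(acc + str(n))      # suspend: consume one integer

def _str(k, acc):
    return lambda s: k(acc + s)           # suspend: consume one string

def printf(*fmt):
    """Danvy-style functional unparsing: the format is a sequence of
    combinators; the result is a curried function over the arguments."""
    def run(acc, parts):
        if not parts:
            return acc
        head, rest = parts[0], parts[1:]
        return head(lambda a: run(a, rest), acc)
    return run("", list(fmt))

# printf(lit("x = "), _int, lit(", y = "), _str)(42)("hi")
# -> 'x = 42, y = hi'
```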

12.
Given natural limitations on the length of DNA sequences, designing phylogenetic reconstruction methods which are reliable under limited information is a crucial endeavor. There have been two approaches to this problem: reconstructing partial but reliable information about the tree (Mossel in IEEE Comput. Biol. Bioinform. 4:108–116, 2007; Daskalakis et al. in SIAM J. Discrete Math. 25:872–893, 2011; Daskalakis et al. in Proc. of RECOMB 2006, pp. 281–295, 2006; Gronau et al. in Proc. of the 19th Annual SODA 2008, pp. 379–388, 2008), and reaching “deeper” into the tree through reconstruction of ancestral sequences. In the latter category, Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006) settled an important conjecture of M. Steel (My favourite conjecture. Preprint, 2001), showing that, under the CFN model of evolution, all trees on n leaves with edge lengths bounded by the Ising model phase transition can be recovered with high probability from genomes of length $O(\log n)$ with a polynomial time algorithm. Their methods had a running time of $O(n^{10})$. Here we enhance our methods from Daskalakis et al. (Proc. of RECOMB 2006, pp. 281–295, 2006) with the learning of ancestral sequences and provide an algorithm for reconstructing a sub-forest of the tree which is reliable given the available data, without requiring a priori known bounds on the edge lengths of the tree. Our methods are based on an intuitive minimum spanning tree approach and run in $O(n^3)$ time. For the case of full reconstruction of trees with edges under the phase transition, we maintain the same asymptotic sequence length requirements as in Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006), despite the considerably faster running time.
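As a hedged illustration of the minimum-spanning-tree intuition (not the paper's algorithm or its guarantees): connect taxa by normalized Hamming distance, build an MST, and drop edges too long to be estimated reliably, leaving a sub-forest. The distance cutoff below is an invented parameter.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def reliable_forest(seqs, max_dist=0.3):
    """Toy MST-based sub-forest: seqs is a list of equal-length strings.
    Returns the MST edges whose normalized Hamming distance is small
    enough to be considered reliable."""
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = sum(a != b for a, b in zip(seqs[i], seqs[j])) / len(seqs[i])
            D[i, j] = d   # note: identical sequences (distance 0) are
                          # treated as non-edges by scipy's sparse convention
    mst = minimum_spanning_tree(D).tocoo()
    return [(int(i), int(j)) for i, j, w in zip(mst.row, mst.col, mst.data)
            if w <= max_dist]
```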

13.
A theoretical analysis tool, iterated optimal stopping, has been used as the basis of a numerical algorithm for American options under regime switching (Le and Wang in SIAM J Control Optim 48(8):5193–5213, 2010). Similar methods have also been proposed for American options under jump diffusion (Bayraktar and Xing in Math Methods Oper Res 70:505–525, 2009) and Asian options under jump diffusion (Bayraktar and Xing in Math Fin 21(1):117–143, 2011). An alternative method, local policy iteration, has been suggested in Huang et al. (SIAM J Sci Comput 33(5):2144–2168, 2011) and Salmi and Toivanen (Appl Numer Math 61:821–831, 2011). Worst-case upper bounds on the convergence rates of these two methods suggest that local policy iteration should be preferred over iterated optimal stopping (Huang et al. in SIAM J Sci Comput 33(5):2144–2168, 2011). In this article, numerical tests are presented which indicate that the observed performance of these two methods is consistent with the worst-case upper bounds. In addition, while these two methods seem quite different, we show that either one can be converted into the other by a simple rearrangement of two loops.

14.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant like coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant's editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita's user interface language.

15.
In this paper, inspired by several types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated lattice versions of them and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices constructed via them.

16.
This work contrasts Giovanni Sartor's view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp's probabilistic model of theory formation in the context of mapping legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, the normal-IRM (n-IRM) proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets where legal concepts related to educational systems are defined by the Japanese and Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is the proposal of a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner's inferential mechanisms.

17.
We first consider the problem of finding a maximum size stable matching if incomplete lists and ties are both allowed, but ties appear on one side only. For this problem we give a simple, linear time 3/2-approximation algorithm, improving on the best known approximation factor of 5/3 due to Irving and Manlove (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007). Next, we show how this extends to the Hospitals/Residents problem with the same ratio if the residents have strict orders. We also give a simple linear time algorithm for the general problem with approximation factor 5/3, improving the best known 15/8-approximation algorithm of Iwama, Miyazaki and Yamauchi (SODA '07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 288–297, 2007). For the cases considered in this paper it is NP-hard to approximate within a factor of 21/19, by the result of Halldórsson et al. (ACM Transactions on Algorithms 3(3):30, 2007). Our algorithms not only give better approximation ratios than the cited ones, but are much simpler and run significantly faster. We may also drop a restriction used in (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007), and the analysis is substantially simpler. Preliminary versions of this paper appeared in (Király, Egres Technical Report TR-2008-04, www.cs.elte.hu/egres/, 2008; Király in Proceedings of MATCH-UP 2008: Matching Under Preferences – Algorithms and Complexity, Satellite Workshop of ICALP, July 6, 2008, Reykjavík, Iceland, pp. 36–45, 2008; Király in ESA 2008, Lecture Notes in Computer Science, vol. 5193, pp. 623–634, 2008). For related results obtained since then, see Sect. 5.
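For orientation, the classical Gale-Shapley procedure for strict, complete lists is the starting point that the ties/incomplete-lists variants above extend; a standard sketch:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Classical Gale-Shapley for strict, complete preference lists,
    the baseline extended by the approximation algorithms in this entry.
    Returns a stable matching as {woman: man}."""
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_pick = {m: 0 for m in men_prefs}   # next proposal index per man
    match = {}                              # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_pick[m]]
        next_pick[m] += 1
        if w not in match:
            match[w] = m
        elif rank[w][m] < rank[w][match[w]]:   # w prefers the newcomer
            free.append(match[w])
            match[w] = m
        else:
            free.append(m)
    return match
```

With ties and incomplete lists a maximum stable matching is no longer guaranteed by this loop, which is exactly where the 3/2- and 5/3-approximation analyses above take over.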

18.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the “no three disjoint equivoluminous subsets” question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a “multicolored” sunflower conjecture in $\mathbb{Z}_3^n$ and show that (if true) it implies a negative answer to the “strong USP” conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in $\mathbb{Z}_3^n$ is a strengthening of the well-known (ordinary) sunflower conjecture in $\mathbb{Z}_3^n$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of $(2.51\ldots)^n$ on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of $(2.21\ldots)^n$ (Edel 2004) on the size of the largest 3-sunflower-free set in $\mathbb{Z}_3^n$.

19.
Matthias Möller. Computing, 2013, 95(5): 425–448.
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion whose magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is attested for $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.

20.
We introduce a new framework for designing and analyzing algorithms. Our framework applies best to problems that are inapproximable according to standard worst-case analysis. We circumvent such negative results by designing guarantees for classes of instances, parameterized according to properties of the optimal solution. Given our parameterized approximation, called PArametrized by the Signature of the Solution (PASS) approximation, we design algorithms with optimal approximation ratios for problems with additive and submodular objective functions, such as capacitated maximum facility location problems. We consider two types of algorithms for these problems. For greedy algorithms, our framework provides a justification for preferring a certain natural greedy rule over some alternative greedy rules that have been used in similar contexts. For LP-based algorithms, we show that the natural LP relaxation for these problems is not optimal in our framework; we design a new LP relaxation and show that, coupled with a new randomized rounding technique, it is optimal in our framework. In passing, we note that our results strictly improve on previous results of Kleinberg et al. (J. ACM 51(2):263–280, 2004) concerning the approximation ratio of the greedy algorithm.
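The "natural greedy rule" the entry refers to can be stated generically for a monotone objective; a minimal sketch (the marginal-gain oracle `gain` is an assumption standing in for the facility-location objective):

```python
def greedy_select(candidates, budget, gain):
    """Generic greedy for a monotone objective: repeatedly open the
    facility with the largest marginal gain.  gain(S, c) must return
    f(S + [c]) - f(S) for the current selection S."""
    chosen = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen, c), default=None)
        if best is None or gain(chosen, best) <= 0:
            break                       # no candidate still helps
        chosen.append(best)
    return chosen
```

For submodular objectives this rule carries the classical (1 - 1/e) guarantee; the PASS framework above refines such worst-case statements per instance class.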
