Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Robust and accurate detection of the pupil position is a key building block for head-mounted eye tracking and a prerequisite for applications on top, such as gaze-based human–computer interaction or attention analysis. Despite a large body of work, detecting the pupil in images recorded under real-world conditions is challenging given significant variability in eye appearance (e.g., illumination, reflections, occlusions), individual differences in eye physiology, as well as other sources of noise such as contact lenses or make-up. In this paper we review six state-of-the-art pupil detection methods, namely ElSe (Fuhl et al. in Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ACM, New York, NY, USA, pp 123–130, 2016), ExCuSe (Fuhl et al. in Computer analysis of images and patterns. Springer, New York, pp 39–51, 2015), Pupil Labs (Kassner et al. in Adjunct proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (UbiComp), pp 1151–1160, 2014. doi: 10.1145/2638728.2641695), SET (Javadi et al. in Front Neuroeng 8, 2015), Starburst (Li et al. in Computer vision and pattern recognition-workshops, 2005. IEEE Computer society conference on CVPR workshops. IEEE, pp 79–79, 2005), and Świrski (Świrski et al. in Proceedings of the symposium on eye tracking research and applications (ETRA). ACM, pp 173–176, 2012. doi: 10.1145/2168556.2168585). We compare their performance on a large-scale data set consisting of 225,569 annotated eye images taken from four publicly available data sets. Our experimental results show that the algorithm ElSe (Fuhl et al. 2016) outperforms the other pupil detection methods by a large margin, thus offering robust and accurate pupil positions on challenging everyday eye images.
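Evaluation in this line of work typically reports a detection rate as a function of a pixel-error threshold: an estimated pupil centre counts as correct when it lies within a given Euclidean distance of the annotation. A minimal sketch of that protocol, with synthetic centres standing in for real detector output (the data and the 5-px default are illustrative):

```python
import numpy as np

# Hypothetical evaluation harness: `pred` and `gt` are (N, 2) arrays of
# estimated and annotated pupil centres; a detection is correct if the
# Euclidean pixel error is at most `max_px` (5 px is a common choice).
def detection_rate(pred, gt, max_px=5.0):
    err = np.linalg.norm(pred - gt, axis=1)
    return float((err <= max_px).mean())

rng = np.random.default_rng(1)
gt = rng.uniform(0, 384, size=(1000, 2))          # synthetic annotations
pred = gt + rng.normal(0, 3, size=gt.shape)       # synthetic detector output
for t in (1, 2, 5, 10):
    print(t, detection_rate(pred, gt, t))         # rate grows with threshold
```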

2.
Flutter shutter (coded exposure) cameras make it possible to extend the exposure time indefinitely in the presence of uniform motion blur. Recently, Tendero et al. (SIAM J Imaging Sci 6(2):813–847, 2013) proved that for a fixed known velocity v the gain of a flutter shutter in terms of root mean square error (RMSE) cannot exceed a 1.1717 factor compared to an optimal snapshot. The aforementioned bound is optimal in the sense that this 1.1717 factor can be attained. However, this disheartening bound is in direct contradiction with recent results by Cossairt et al. (IEEE Trans Image Process 22(2):447–458, 2013). Our first goal in this paper is to resolve this discrepancy mathematically. An interesting question was raised by the authors of reference (IEEE Trans Image Process 22(2):447–458, 2013). They state that the “gain for computational imaging is significant only when the average signal level J is considerably smaller than the read noise variance \(\sigma _r^2\)” (Cossairt et al., IEEE Trans Image Process 22(2):447–458, 2013, p. 5). In other words, according to Cossairt et al. (IEEE Trans Image Process 22(2):447–458, 2013), a flutter shutter would be more efficient when used in low-light conditions, e.g., indoor scenes or at night. Our second goal is to prove that this statement is based on an incomplete camera model and that a complete mathematical model disproves it. To do so we propose a general flutter shutter camera model that includes photonic, thermal (the amplifier noise, another source of background noise, can be included w.l.o.g. in the thermal noise) and additive (the sensor readout noise may contain other components such as reset noise and quantization noise; we include them w.l.o.g. in the readout) noises of finite variances. Our analysis provides exact formulae for the mean square error of the final deconvolved image. It also allows us to confirm that the gain in terms of RMSE of any flutter shutter camera is bounded from above by 1.1776 when compared to an optimal snapshot. The bound is uniform with respect to the observation conditions and applies to any fixed known velocity. Incidentally, the proposed formalism and its consequences also apply to, e.g., the motion-invariant photography of Levin et al. (ACM Trans Graphics (TOG) 27(3):71:1–71:9, 2008; Method and apparatus for motion invariant imaging, October 1 2009. US Patent 20,090,244,300, 2009) and its variant (Cho et al. Motion blur removal with orthogonal parabolic exposures, 2010). In short, we provide mathematical proofs contradicting the claims of Cossairt et al. (IEEE Trans Image Process 22(2):447–458, 2013). Lastly, this paper points out the kind of optimization needed if one wants to turn the flutter shutter into a useful imaging tool.
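The trade-off at stake (a coded long exposure gathers more photons but pays a noise-amplification price at deconvolution) can be illustrated numerically. The toy 1D simulation below is only a rough sketch under strong assumptions: a static scene, an arbitrary binary code, and plain inverse filtering, not the paper's motion-blur model or its optimal snapshot:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
code = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1], float)   # arbitrary binary code
x = 50.0 + 30.0 * np.sin(np.arange(n) * 0.2)             # mean photon counts

def capture(signal, kernel, sigma_r=2.0):
    """Blur by the exposure kernel, add Poisson (photon) and Gaussian
    (readout) noise, then deconvolve by inverse filtering."""
    K = np.fft.fft(kernel, n)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
    noisy = rng.poisson(np.clip(blurred, 0, None)) + rng.normal(0, sigma_r, n)
    return np.real(np.fft.ifft(np.fft.fft(noisy) / K))

for name, kernel in (("flutter", code), ("snapshot", np.array([1.0]))):
    rec = capture(x, kernel)
    print(name, "RMSE:", np.sqrt(np.mean((rec - x) ** 2)))
```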

3.
Phononic crystals (PnC) with a specifically designed liquid-filled defect have recently been introduced as a novel sensor platform (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012). Sensors based on this principle feature a band gap covering the typical input span of the measurand as well as a narrow transmission peak within the band gap whose frequency of maximum transmission is governed by the measurand. This approach has been applied to the determination of volumetric properties of liquids (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012; Oseev et al. in Sens Actuators B Chem 189:208–212, 2013; Lucklum and Li in Meas Sci Technol 20(12):124014, 2009) and has demonstrated attractive sensitivity. One way to improve sensitivity requires higher probing frequencies, in the range of 100 MHz and above. In this range surface acoustic wave (SAW) devices are an established basis for sensors. We have performed first tests towards a PnC microsensor (Lucklum et al. in Towards a SAW based phononic crystal sensor platform. In: 2013 Joint European frequency and time forum and international frequency control symposium (EFTF/IFC), pp 69–72, 2013). The respective feature size of the PnC SAW sensor has dimensions in the range of 10 µm and below. Whereas those dimensions are state of the art for common MEMS materials, etching holes and cavities with the required diameter-to-depth aspect ratio in piezoelectric materials is still challenging. In this contribution we describe an improved technological process able to realize considerably deep and uniform holes in a SAW substrate.

4.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain method using an immersed microstructure approach, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time-step adaptation allows the numerical scheme to iteratively resolve the changing physical fields by refining the time-steps during phases of the process when the system is undergoing large changes on a relatively small time-scale, and to enlarge the time-steps when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally, in order to be easily utilized by researchers in the field. More advanced conduction models, based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
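The adaptive time-stepping described here can be made concrete with a small driver loop. The following is a minimal, self-contained sketch, assuming toy single-value "fields" and illustrative relaxation updates in place of the paper's optical–thermal–mechanical solvers; all names and tolerances are hypothetical:

```python
# Toy stand-ins for the staggered field solvers (not the paper's physics).
def solve_optical(u, dt):        return u + dt * (1.0 - u)
def solve_thermal(u, v, dt):     return u + dt * 50.0 * (v - u)
def solve_mechanical(u, w, dt):  return u + dt * 0.1 * (w - u)

def adaptive_staggered(T_end, dt=1e-3, tol=1e-2, dt_min=1e-6, dt_max=1e-1):
    opt = th = mech = 0.0
    t = 0.0
    while t < T_end:
        while True:
            o = solve_optical(opt, dt)          # staggered pass: each field
            h = solve_thermal(th, o, dt)        # is updated with the freshest
            m = solve_mechanical(mech, h, dt)   # values of the others
            change = max(abs(o - opt), abs(h - th), abs(m - mech))
            if change > tol and dt > dt_min:
                dt = max(0.5 * dt, dt_min)      # refine: fields change too fast
            else:
                break
        opt, th, mech, t = o, h, m, t + dt
        if change < 0.1 * tol:
            dt = min(1.5 * dt, dt_max)          # enlarge: slow phase
    return opt, th, mech

print(adaptive_staggered(1.0))
```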

5.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.

6.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of reaction automata (RAs) studied in the recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a; Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs working in the sequential manner coincides with that of the class of Petri nets, which is in marked contrast to the result that RAs (in both the maximally parallel and sequential manners) have the computing power of Turing universality (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous result on Turing computability of RAs but also clarify the computing power of inhibitors in RA computation.
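To make the two working modes concrete, here is a hedged toy sketch in which a configuration is a multiset of species and a reaction (with no inhibitors, as in CRAs) is a pair of reactant and product multisets; the example reaction is illustrative only:

```python
from collections import Counter

def enabled(config, reaction):
    reactants, _ = reaction
    return all(config[s] >= k for s, k in reactants.items())

def apply_sequential(config, reaction):
    """Sequential manner: apply one enabled reaction exactly once."""
    reactants, products = reaction
    out = config.copy()
    out.subtract(reactants)
    out.update(products)
    return out

def apply_max_parallel(config, reaction):
    """One ingredient of the maximally parallel manner: apply a reaction as
    many times as the reactants allow (a full step combines reactions)."""
    reactants, products = reaction
    times = min(config[s] // k for s, k in reactants.items())
    out = config.copy()
    for s, k in reactants.items():
        out[s] -= times * k
    for s, k in products.items():
        out[s] += times * k
    return out

r = (Counter({"a": 1, "b": 1}), Counter({"c": 1}))   # a + b -> c
cfg = Counter({"a": 3, "b": 2})
print(enabled(cfg, r))             # True
print(apply_sequential(cfg, r))    # one application:  a:2, b:1, c:1
print(apply_max_parallel(cfg, r))  # two applications: a:1, c:2
```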

7.
Testing for stationarity and unit roots has become standard practice in time series analysis, and while many tests have known asymptotic properties, their small-sample performance is sometimes less well understood. Researchers rely on response surface regressions to provide small-sample critical values for use in applied work. In this paper an updated series of Monte Carlo experiments provides response surface estimates of the 1, 5, and 10 % critical values of the Kwiatkowski et al. (J Econ 54:91–115, 1992) test of stationarity and its generalization by Hobijn et al. (Stat Neerlandica 58(4):483–502, 2004).
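The response-surface idea itself is easy to sketch: simulate the test statistic at several sample sizes, take empirical quantiles, then regress those quantiles on functions of 1/T so a critical value can be produced for any T. The sketch below uses a naive KPSS level statistic (no HAC long-run variance correction) and an assumed regressor set (1, 1/T, 1/T²); both are simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)

def kpss_stat(y):
    e = y - y.mean()               # residuals under level stationarity
    s = np.cumsum(e)
    lrv = (e ** 2).mean()          # naive long-run variance estimate
    return (s ** 2).sum() / (len(y) ** 2 * lrv)

sizes, q95 = [25, 50, 100, 200, 400], []
for T in sizes:
    stats = [kpss_stat(rng.standard_normal(T)) for _ in range(2000)]
    q95.append(np.quantile(stats, 0.95))   # simulated 5 % critical value

X = np.column_stack([np.ones(len(sizes)),
                     1.0 / np.asarray(sizes),
                     1.0 / np.asarray(sizes) ** 2])
beta, *_ = np.linalg.lstsq(X, np.asarray(q95), rcond=None)

T = 137                                    # any sample size, via the surface
print(beta @ [1.0, 1.0 / T, 1.0 / T ** 2])
```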

8.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with the \(C^1\) finite elements (Jardin J Comput Phys 200(1):133–152, 2004; Jardin et al. J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin J Comput Phys 228(20):7742–7770, 2009; Jardin J Comput Phys 231(3):832–838, 2012; Jardin et al. Comput Sci Discov 5(1):014002, 2012; Ferraro et al. Sci Discov Adv Comput, 2012; Ferraro et al. International Sherwood fusion theory conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

9.
Kaltofen (Randomness in computation, vol 5, pp 375–412, 1989) proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen’s work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in Kopparty et al. (2014), and the question of bounding the depth of the circuits computing the factors of a polynomial. We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if \({P(x_{1},\ldots,x_{n})}\) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor \({f(x_{1},\ldots, x_{n})}\) of \({P(x_{1},\ldots,x_{n})}\) can be computed by a formula of size \({\textsf{poly}((rn)^{r},s)}\) and depth d + 5. This partially answers the question above posed in Kopparty et al. (2014), who asked if this result holds without the dependence on r. Our work generalizes the main factorization theorem from Dvir et al. (SIAM J Comput 39(4):1279–1293, 2009), who proved it for the special case when the factors are of the form \({f(x_{1}, \ldots, x_{n}) \equiv x_{n} - g(x_{1}, \ldots, x_{n-1})}\). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).

10.
We introduce a family of generalized prolate spheroidal wave functions (PSWFs) of order \(-1\), and develop new spectral schemes for second-order boundary value problems. Our technique differs from the differentiation approach based on PSWFs of order zero in Kong and Rokhlin (Appl Comput Harmon Anal 33(2):226–260, 2012); in particular, our orthogonal basis can naturally include homogeneous boundary conditions without the re-orthogonalization of Kong and Rokhlin (2012). More notably, it leads to diagonal systems or direct “explicit” solutions to 1D Helmholtz problems in various situations. Using a rule optimally pairing the bandwidth parameter and the number of basis functions as in Kong and Rokhlin (2012), we demonstrate that the new method significantly outperforms the Legendre spectral method in approximating highly oscillatory solutions. We also conduct a rigorous error analysis of this new scheme. The idea and analysis can be extended to generalized PSWFs of negative integer order for higher-order boundary value and eigenvalue problems.

11.
This paper proposes two new no-reference image quality metrics that can be adopted by state-of-the-art image/video denoising algorithms for auto-denoising. The first metric is proposed based on the assumption that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus tackles homogeneous regions and highly structured regions separately. Nevertheless, this metric is only stable when the noise level is relatively low. Most denoising algorithms reduce noise by (weighted) averaging repeated noisy measurements. As a result, another metric is proposed for high-level noise based on the fact that more noisy measurements are required as the noise level increases. The number of measurements needed before convergence is thus related to the quality of noisy images. Our patch-matching-based metric iteratively finds and adds noisy image measurements for averaging until there is no visible difference between two successively averaged images. Both metrics are evaluated on the LIVE2 (Sheikh et al. in LIVE image quality assessment database release 2, 2013) and TID2013 (Ponomarenko et al. in Color image database TID2013: peculiarities and preliminary results, 2005) data sets using standard Spearman and Kendall rank-order correlation coefficients (ROCC), showing that they subjectively outperform current state-of-the-art no-reference metrics. Quantitative evaluation w.r.t. different levels of synthetic noise also demonstrates consistently higher performance over state-of-the-art no-reference metrics when used for image denoising.
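The convergence-counting idea behind the second metric can be sketched directly: keep averaging fresh noisy measurements and count how many are needed before two successive averages become visually indistinguishable. The threshold and the synthetic measurement setup below are assumptions, not the paper's patch-matching procedure:

```python
import numpy as np

def measurements_to_converge(clean, sigma, thresh=0.5, max_n=256, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(clean, dtype=float)
    prev = None
    for n in range(1, max_n + 1):
        acc += clean + rng.normal(0.0, sigma, clean.shape)  # fresh noisy shot
        avg = acc / n
        if prev is not None and np.abs(avg - prev).mean() < thresh:
            return n          # noisier input => more shots => lower quality
        prev = avg
    return max_n

img = np.full((64, 64), 128.0)
print(measurements_to_converge(img, sigma=5.0))    # converges quickly
print(measurements_to_converge(img, sigma=25.0))   # needs many more shots
```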

12.
13.
In this paper we investigate the problem of partitioning an input string T in such a way that compressing each part individually via a base-compressor C yields a compressed output shorter than applying C over the entire T at once. This problem was introduced in Buchsbaum et al. (Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003) in the context of table compression, and then further elaborated and extended to strings and trees by Ferragina et al. (J. ACM 52:688–713, 2005; Proc. of 46th IEEE Symposium on Foundations of Computer Science, pp. 184–193, 2005) and Mäkinen and Navarro (Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007). Unfortunately, the literature offers poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming (Buchsbaum et al. in J. ACM 50(6):825–851, 2003; Giancarlo and Sciortino in Proc. of 14th Symposium on Combinatorial Pattern Matching, pp. 129–143, 2003), or few heuristics that do not guarantee any bounds on the efficacy of their computed partition (Buchsbaum et al. in Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003), or algorithms that are efficient but work only in specific scenarios (such as the Burrows-Wheeler Transform, see e.g. Ferragina et al. in J. ACM 52:688–713, 2005; Mäkinen and Navarro in Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007) and achieve compression performance that might be worse than the optimal partitioning by an Ω(log n / log log n) factor. Therefore, computing the optimal solution efficiently is still open (Buchsbaum and Giancarlo in Encyclopedia of Algorithms, pp. 939–942, 2008). In this paper we provide the first algorithm which computes, in O(n log^{1+ε} n) time and O(n) space, a partition of T whose compressed output is guaranteed to be no more than (1+ε) times worse than the optimal one, where ε may be any positive constant fixed in advance. This result holds for any base-compressor C whose compression performance can be bounded in terms of the zeroth- or the k-th order empirical entropy of the text T. We will also discuss extensions of our results to BWT-based compressors and to the compression booster of Ferragina et al. (J. ACM 52:688–713, 2005).
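For contrast with the paper's near-linear-time result, the dynamic program mentioned above is short enough to sketch; its two nested loops plus a linear-cost compression call per transition give the cubic running time. Here zlib stands in for the base-compressor C, which is an arbitrary choice:

```python
import zlib

def optimal_partition_cost(T: bytes):
    """Optimal partition cost of T: best[j] is the cheapest way to compress
    T[:j] as a concatenation of individually compressed pieces."""
    n = len(T)
    comp = lambda i, j: len(zlib.compress(T[i:j]))
    best = [0] + [float("inf")] * n
    for j in range(1, n + 1):
        for i in range(j):
            best[j] = min(best[j], best[i] + comp(i, j))
    return best[n]

data = b"aaaaaaaaaaaaaaaa0123456789" * 4
print(optimal_partition_cost(data))        # best partition
print(len(zlib.compress(data)))            # whole string at once
```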

14.
Polydimethylsiloxane (PDMS) is a commonly used material in biomedical engineering (Sollier et al. in Lab Chip 11(22):3752–3765, 2011; Palchesko et al. in PLoS ONE 7(12):e51499, 2012; Berthier et al. in Lab Chip 12(7):1224–1237, 2012). Its elastic nature makes PDMS especially attractive for microfluidic large-scale integration (mLSI) technology, where micromechanical valves are actuated by deflecting a PDMS membrane under pressure. Therefore, understanding and controlling the elastic properties of PDMS has commercial and scientific significance. In this study, we investigated the effects of pre-polymer/cross-linker storage conditions on the mechanical properties of cured PDMS films as well as on microfluidic devices. We have shown that when the uncured components of PDMS are exposed to different humidity conditions, the elasticity of the PDMS changes, which is revealed as a change in the Young’s modulus of the cured PDMS. Exposure to high humidity (~85%) for 24 h causes PDMS to become softer, as confirmed by a significant decrease in the Young’s modulus from 1.2 to 0.9 MPa. Furthermore, as the PDMS is exposed to high-humidity conditions for longer periods (72 h), the Young’s modulus decreases down to 0.7 MPa. We found that exposing only the pre-polymer (Part A) to humid air does not alter the cured PDMS properties significantly, whereas exposure of the cross-linker (Part B) is responsible for the elasticity change. We strictly controlled the storage humidity to build more reliable microfluidic chips using mLSI. As a result, the actuation pressure of valves (10 psi) and device defects (in <30% of chips) were significantly reduced. These results suggest that to improve the manufacturing yield and reliability of PDMS devices, storage humidity should be controlled immediately after material synthesis.

15.
What does it take to implement a computer? Answers to this question have often focused on what it takes for a physical system to implement an abstract machine. As Joslin (Minds Mach 16:29–41, 2006) observes, this approach neglects cases of software implementation—cases where one machine implements another by running a program. These cases, Joslin argues, highlight serious problems for mapping accounts of computer implementation—accounts that require a mapping between elements of a physical system and elements of an abstract machine. The source of these problems is the complexity introduced by common design features of ordinary computers, features that would be relevant to any real-world software implementation (e.g., virtual memory). While Joslin is focused on contemporary views, his discussion also suggests a counterexample to recent mapping accounts which hold that genuine implementation requires simple mappings (Millhouse in Br J Philos Sci, 2017. https://doi.org/10.1093/bjps/axx046; Wallace in The emergent multiverse, Oxford University Press, Oxford, 2014). In this paper, I begin by clarifying the nature of software implementation and disentangling it from closely related phenomena like emulation and simulation. Next, I argue that Joslin overstates the degree of complexity involved in his target cases and that these cases may actually give us reasons to favor simplicity-based criteria over relevant alternatives. Finally, I propose a novel problem for simplicity-based criteria and suggest a tentative solution.

16.
17.
The main purpose of this work is to provide a mathematical proof of our previously proposed orthogonal similarity transformation (OST)-based sensitivity analysis method (Zhao et al. Struct Multidisc Optim 50(3):517–522, 2014a; Comput Methods Appl Mech Engrg 273:204–218, 2014c); the proof is designed to show the method’s computational effectiveness. A theoretical study of computational efficiency for both robust topology optimization and robust concurrent topology optimization problems shows the necessity of the OST-based sensitivity analysis method for practical problems. Numerical studies were conducted to demonstrate the computational accuracy of the OST-based sensitivity analysis method and its efficiency over the conventional method. The research leads us to conclude that the OST-based sensitivity analysis method can bring considerable computational savings when used for large-scale robust topology optimization problems, as well as robust concurrent topology optimization problems.

18.
Goldreich et al. (CRYPTO 1999) proved that the promise problem for estimating the Shannon entropy of a distribution sampled by a given circuit is NISZK-complete. We consider the analogous problem for estimating the min-entropy and prove that it is SBP-complete, where SBP is the class of promise problems that correspond to approximate counting of NP witnesses. The result holds even when the sampling circuits are restricted to be 3-local. For logarithmic-space samplers, we observe that this problem is NP-complete by a result of Lyngsø and Pedersen on hidden Markov models (JCSS 65(3):545–569, 2002).
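The two quantities being estimated differ only in which functional of the sampler's output distribution they take. A toy illustration with a hypothetical sampling function over uniform input bits (the function below is illustrative, not a circuit from the paper):

```python
import math
from collections import Counter
from itertools import product

def sampler(bits):                       # toy stand-in for a sampling circuit
    return bits[0] & (bits[1] | bits[2])

counts = Counter(sampler(b) for b in product((0, 1), repeat=3))
total = sum(counts.values())
probs = [c / total for c in counts.values()]

shannon = -sum(p * math.log2(p) for p in probs)   # NISZK-complete to estimate
min_entropy = -math.log2(max(probs))              # SBP-complete to estimate
print(shannon, min_entropy)                       # ~0.954 vs ~0.678 bits
```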

19.
In this paper we present a model for the calculation of the pressure drop of three-phase liquid–liquid–gas slug flow in microcapillaries of circular cross section. The models consist of terms accounting for frictional and interfacial pressure drop, incorporating the presence of a stagnant thin film at the channel wall. Different formulations of the interfacial pressure drop equation were employed, using expressions developed by Bretherton (J Fluid Mech 10:166–188, 1961), Warnier et al. (Microfluid Nanofluid 8:33–45, 2010) or Ratulowski and Chang (Phys Fluids A 1:1642–1655, 1989). The models were validated experimentally using oleic acid–water–nitrogen and heptane–water–nitrogen three-phase flows in round Teflon or Radel R microchannels of 254- and 508-µm nominal inner diameter, for capillary numbers Ca_b between 10^−4 and 4.9 × 10^−1 and Reynolds numbers Re between 0.095 and 300. Best agreement between measured and calculated pressure drop, with relative errors between −22 and 19 % or −20 and 16 %, is reached for Warnier’s or Ratulowski and Chang’s interfacial pressure drop equation, respectively. The results prove that three-phase slug flow pressure drop can be successfully predicted by extending existing two-phase slug flow correlations. Good agreement with Bretherton’s equation was reached only at lower Ca numbers, indicating that an extension of the interfacial pressure drop equation for higher capillary numbers, as performed by Warnier et al. (Microfluid Nanofluid 8:33–45, 2010) or Ratulowski and Chang (Phys Fluids A 1:1642–1655, 1989), is necessary. Additionally, it was demonstrated that the pressure drop increases substantially if dry slug flow occurs or if microchannels with significant surface roughness are employed. These influences were not accounted for in the presented models.
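The structure of such a model, a frictional term plus one interfacial contribution per interface, can be sketched as follows. This is a hedged simplification: the Poiseuille frictional term ignores the film, and the interfacial term uses the commonly quoted form of Bretherton's result with prefactor 7.16 and σ/r scaling; both should be treated as assumptions rather than the paper's exact correlations:

```python
def slug_flow_dp(L, r, mu_c, sigma, u, n_caps, C=7.16):
    """Total pressure drop [Pa]: Hagen-Poiseuille friction over length L
    plus a Bretherton-type jump at each of n_caps interfaces."""
    ca = mu_c * u / sigma                           # capillary number
    dp_friction = 8.0 * mu_c * L * u / r ** 2       # single-phase friction
    dp_interface = n_caps * C * (3.0 * ca) ** (2.0 / 3.0) * sigma / r
    return dp_friction + dp_interface

# water-like carrier phase in a 254-um-diameter channel (illustrative values)
print(slug_flow_dp(L=0.1, r=127e-6, mu_c=1e-3, sigma=0.02, u=0.05, n_caps=40))
```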

20.
Fast Image Inpainting Based on Coherence Transport
High-quality image inpainting methods based on nonlinear higher-order partial differential equations have been developed in the last few years. These methods are iterative by nature, with a time variable serving as iteration parameter. For reasons of stability a large number of iterations can be needed, which results in a computational complexity that is often too large for interactive image manipulation. Based on a detailed analysis of stationary first-order transport equations, the current paper develops a fast noniterative method for image inpainting. It traverses the inpainting domain by the fast marching method just once while transporting, along the way, image values in a coherence direction robustly estimated by means of the structure tensor. Depending on a measure of coherence strength the method switches continuously between diffusion and directional transport. It satisfies a comparison principle. Experiments with the inpainting of gray tone and color images show that the novel algorithm meets the high level of quality of the methods of Bertalmio et al. (SIGGRAPH ’00: Proc. 27th Conf. on Computer Graphics and Interactive Techniques, New Orleans, ACM Press/Addison-Wesley, New York, pp. 417–424, 2000), Masnou (IEEE Trans. Image Process. 11(2):68–76, 2002), and Tschumperlé (Int. J. Comput. Vis. 68(1):65–82, 2006), while being faster by at least an order of magnitude.
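Coherence-transport inpainting itself is not in common libraries, but a related single-pass, fast-marching-based method (Telea's) ships with OpenCV and illustrates the noniterative traversal idea: pixels are filled in order of increasing distance from the boundary of the inpainting domain. A minimal sketch on synthetic data:

```python
import numpy as np
import cv2

img = np.full((128, 128, 3), 200, np.uint8)
cv2.circle(img, (64, 64), 40, (40, 90, 200), -1)     # synthetic content

mask = np.zeros((128, 128), np.uint8)
cv2.rectangle(mask, (50, 20), (78, 108), 255, -1)    # region to fill in

# Single fast-marching traversal; cv2.INPAINT_NS is the PDE-based alternative.
restored = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```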
