Similar Documents
20 similar documents found.
1.
We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of \(x_1, \ldots , x_t\) to compute \(f(x_1, \ldots , x_t)\) but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure. Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either relied on stronger assumptions and provided weaker security guarantees (Goldwasser et al. in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.
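For illustration, here is a minimal correctness-only mock of the private-key MIFE syntax described above. The names (setup, keygen, enc, dec) and the plaintext-carrying "ciphertexts" are ours and purely hypothetical; the mock captures only the correctness contract Dec(sk_f, ct_1, ..., ct_t) = f(x_1, ..., x_t) and provides no security whatsoever.

```python
# Correctness-only mock of a private-key multi-input functional
# encryption (MIFE) interface. Illustrative names; NOT a secure scheme:
# these "ciphertexts" hide nothing. The point is only the syntax
# Dec(sk_f, ct_1, ..., ct_t) = f(x_1, ..., x_t).
import os

def setup():
    return os.urandom(16)                 # master secret key (unused by the mock)

def keygen(msk, f):
    return ("fkey", f)                    # functional key for f

def enc(msk, slot, x):
    return ("ct", slot, x)                # a real scheme would hide x

def dec(fkey, *cts):
    _, f = fkey
    inputs = [x for (_, _slot, x) in sorted(cts, key=lambda c: c[1])]
    return f(*inputs)                     # f of the t inputs, nothing else

msk = setup()
sk_sum = keygen(msk, lambda x1, x2, x3: x1 + x2 + x3)
cts = [enc(msk, i, v) for i, v in enumerate([3, 5, 7])]
assert dec(sk_sum, *cts) == 15
```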

2.
We give generic constructions of several fundamental cryptographic primitives based on a new encryption primitive that combines circular security for bit encryption with the so-called reproducibility property (Bellare et al. in Public key cryptography—PKC 2003, vol. 2567, pp. 85–99, Springer, 2003). At the heart of our constructions is a novel technique that both de-randomizes reproducible public-key bit encryption schemes and reduces the one-wayness conditions of a constructed trapdoor function family (TDF) to the circular security of the base scheme. The main primitives that we build from our encryption primitive include k-wise one-way TDFs (Rosen and Segev in SIAM J Comput 39(7):3058–3088, 2010), chosen-ciphertext-attack-secure encryption and deterministic encryption. Our results demonstrate a new set of applications of circularly secure encryption beyond fully homomorphic encryption and symbolic soundness. Finally, we show the plausibility of our assumptions by showing that the decisional Diffie–Hellman-based circularly secure scheme of Boneh et al. (Advances in cryptology—CRYPTO 2008, vol. 5157, Springer, 2008) and the subgroup indistinguishability-based scheme of Brakerski and Goldwasser (Advances in cryptology—CRYPTO 2010, vol. 6223, pp. 1–20, Springer, 2010) are both reproducible.
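The standard example of a reproducible scheme is textbook ElGamal (the toy parameters below are ours): given the randomness-carrying component g^r of a ciphertext under one key, the holder of a second key pair can assemble a ciphertext of a new message under the second key with the same randomness r.

```python
# Reproducibility for ElGamal over a small prime-order subgroup (toy
# parameters, illustration only): from the component g^r of someone
# else's ciphertext, the holder of (pk2, sk2) can compute
# Enc(pk2, m2; r), i.e., a ciphertext under pk2 with the SAME r.
p, q, g = 23, 11, 4              # g generates the order-11 subgroup of Z_23*

def enc(pk, m, r):
    return (pow(g, r, p), (m * pow(pk, r, p)) % p)

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - sk, p)) % p   # c2 / c1^sk inside the subgroup

a, b = 3, 7                       # two secret keys
pk1, pk2 = pow(g, a, p), pow(g, b, p)
r, m1, m2 = 5, 9, 13              # messages chosen inside the subgroup

c1, _ = enc(pk1, m1, r)           # sender's ciphertext under pk1
reproduced = (c1, (m2 * pow(c1, b, p)) % p)   # built from c1, m2, sk2 only
assert reproduced == enc(pk2, m2, r)          # same randomness r reproduced
assert dec(b, reproduced) == m2
```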

3.
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Although the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided. Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages \(\mathsf{m}_1, \ldots , \mathsf{m}_T\) together with decryption keys corresponding to functions \(f_1, \ldots , f_T\) reveal essentially no information other than the values \(\{ f_i(\mathsf{m}_j)\}_{i,j\in [T]}\). Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products) or offer somewhat weak notions of function privacy. We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based on the learning with errors assumption, on obfuscation assumptions, on simple multilinear-maps assumptions, or even on the existence of any one-way function (offering various trade-offs between security and efficiency).
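To make the target notion concrete, the toy below computes the ideal-world leakage matrix {f_i(m_j)} for a small inner-product family; a function-private scheme should reveal essentially nothing beyond this table. Purely illustrative, not a construction.

```python
# The ideal-world leakage of a function-private private-key FE scheme:
# given keys for f_1..f_T and ciphertexts for m_1..m_T, an adversary
# should learn (essentially) only the T x T table {f_i(m_j)}.
msgs = [(1, 2), (0, 3), (4, 1)]          # messages m_1..m_T
fns  = [(1, 0), (2, 5)]                  # inner-product functions f_i = <y_i, .>

def inner(y, m):
    return sum(yk * mk for yk, mk in zip(y, m))

leakage = [[inner(y, m) for m in msgs] for y in fns]   # {f_i(m_j)}_{i,j}
print(leakage)                           # [[1, 0, 4], [12, 15, 13]]
```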

4.
We present a new cryptographic primitive, called all-but-many encryption (ABME). An ABME scheme is a tag-based public-key encryption scheme with the following additional properties: a sender who holds the secret key can generate a fake ciphertext that can later be opened to any message with consistent randomness. In addition, anyone who does not own the secret key can neither distinguish a fake ciphertext from a real (honestly generated) one, nor produce a fake one (on a fresh tag) even after seeing many fake ciphertexts and their openings. A motivating application of ABME is universally composable (UC) commitment schemes. We prove that an ABME scheme implies a non-interactive UC commitment scheme that is secure against adaptive adversaries in the non-erasure model under a reusable common reference string. Previously, such a “fully equipped” UC commitment scheme had been known only from Canetti and Fischlin (CRYPTO 2001, vol 2139, Lecture notes in computer science. Springer, Heidelberg, pp 19–40, 2001) and Canetti et al. (STOC 2002, pp 494–503, 2002), with expansion factor \(O(\kappa )\), meaning that to commit to \(\lambda \) bits, communication strictly requires \(O(\lambda \kappa )\) bits, where \(\kappa \) denotes the security parameter. We provide a general framework for constructing ABME and several concrete instantiations from a variety of assumptions. In particular, we present an ABME scheme with expansion factor O(1) from DCR-related assumptions, which yields the first fully equipped UC commitment scheme with a constant expansion factor. In addition, the DCR-based ABME scheme can be transformed to an all-but-many lossy trapdoor function (ABM-LTF), proposed by Hofheinz (EUROCRYPT 2012, vol 7237, Lecture notes in computer science. Springer, Heidelberg, pp 209–227, 2012), with a better lossy rate than Hofheinz (2012).

5.
Bellare, Boldyreva, and O’Neill (CRYPTO ’07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially chosen-plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme. We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by \(2^{p}\), where p can be any predetermined polynomial in the security parameter and plaintext length. For example, with \(p = 0\) we capture plaintext distributions that are independent of the public key, and with \(p = O(s \log s)\) we capture, in particular, all plaintext distributions that are samplable by circuits of size s. Within our framework we present both constructions in the random oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS ’00) and Dodis (Ph.D. Thesis, MIT, ’00).
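The parameter choice \(p = O(s \log s)\) for size-s samplers follows from a standard counting argument; a hedged sketch:

```latex
% Counting sketch: a circuit of size s over a fixed basis is described
% by O(s \log s) bits (each of its s gates names at most two of \le s
% wires), so the number of such circuits is at most 2^{O(s \log s)}.
% Taking p = O(s \log s) therefore lets the adversary's menu of 2^p
% plaintext distributions include every distribution samplable by a
% circuit of size s.
\[
  \#\{\text{circuits of size } s\} \;\le\; 2^{O(s \log s)}
  \quad\Longrightarrow\quad
  p = O(s \log s) \text{ suffices.}
\]
```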

6.
Motivated by applications in large storage systems, we initiate the study of incremental deterministic public-key encryption. Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security for low-entropy plaintext distributions, but Bellare et al. demonstrated that a strong notion of security can in fact be realized for relatively high-entropy plaintext distributions. To achieve a meaningful level of security, a deterministic encryption algorithm should typically be used for encrypting rather long plaintexts, to ensure a sufficient amount of entropy. This requirement may be at odds with efficiency constraints, such as communication complexity and computation complexity in the presence of small updates. Thus, a highly desirable property of deterministic encryption algorithms is incrementality: small changes in the plaintext translate into small changes in the corresponding ciphertext. We present a framework for modeling the incrementality of deterministic public-key encryption. Our framework extends the study of the incrementality of cryptographic primitives initiated by Bellare, Goldreich and Goldwasser (CRYPTO ’94). Within our framework, we propose two schemes, which we prove to enjoy an optimal tradeoff between their security and incrementality up to lower-order factors. Our first scheme is a generic method which can be based on any deterministic public-key encryption scheme, and, in particular, can be instantiated with any semantically secure (randomized) public-key encryption scheme in the random-oracle model. Our second scheme is based on the Decisional Diffie–Hellman assumption in the standard model. The approach underpinning our schemes is inspired by the fundamental “sample-then-extract” technique due to Nisan and Zuckerman (JCSS ’96) and refined by Vadhan (J. Cryptology ’04), and by the closely related notion of “locally computable extractors” due to Vadhan. Most notably, whereas Vadhan used such extractors to construct private-key encryption schemes in the bounded-storage model, we show that techniques along these lines can also be used to construct incremental public-key encryption schemes.
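A toy illustration of the incrementality target (ours, and deliberately naive: a deterministic block-wise map that is incremental but offers none of the paper's security guarantees):

```python
# What incrementality means here: encrypt block-by-block so that a
# local plaintext edit changes only the corresponding ciphertext block.
# This toy (XOR with a per-index pad) is deterministic and incremental,
# but it is NOT the paper's scheme and gives no meaningful security for
# low-entropy or related plaintexts.
import hashlib

BLOCK = 16

def pad(key: bytes, i: int) -> bytes:
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:BLOCK]

def det_enc(key: bytes, pt: bytes) -> list[bytes]:
    blocks = [pt[i:i + BLOCK] for i in range(0, len(pt), BLOCK)]
    return [bytes(x ^ y for x, y in zip(b, pad(key, i)))
            for i, b in enumerate(blocks)]

key = b"k" * 16
pt = bytes(64)                       # four zero blocks
pt2 = bytearray(pt); pt2[40] ^= 1    # flip one bit inside block 2

ct, ct2 = det_enc(key, pt), det_enc(key, bytes(pt2))
changed = [i for i, (a, b) in enumerate(zip(ct, ct2)) if a != b]
assert changed == [2]                # the change stays local
```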

7.
A one-way function is d-local if each of its outputs depends on at most d input bits. In Applebaum et al. (SIAM J Comput 36(4):845–888, 2006), it was shown that, under relatively mild assumptions, there exist 4-local one-way functions (OWFs). This result is not far from optimal as it is not hard to show that there are no 2-local OWFs. The gap was partially closed in Applebaum et al. (2006) by showing that the existence of 3-local OWFs is implied by the intractability of decoding a random linear code (or equivalently the hardness of learning parity with noise). In this note we provide further evidence for the existence of 3-local OWFs. We construct a 3-local OWF based on the assumption that a random function of (arbitrarily large) constant locality is one-way. [A closely related assumption was previously made by Goldreich (Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation, pp. 76–87, 2011).] Our proof consists of two steps: (1) we show that, under the above assumption, Random Local Functions remain hard to invert even when some information on the preimage x is leaked and (2) such “robust” local one-way functions can be converted to 3-local one-way functions via a new construction of semi-private randomized encoding. We believe that these results may be of independent interest.
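A sketch of the kind of random local function underlying such assumptions, with locality d = 3; the wiring is random and public, and the particular nonlinear predicate below is our illustrative choice, not necessarily the one the construction requires:

```python
# A random 3-local function (Goldreich-style): every output bit applies
# a fixed 3-bit predicate to a randomly chosen, publicly known triple
# of input bits. The predicate a ^ (b & c) is illustrative; the
# assumption in the abstract is that for a suitable predicate and
# constant locality, a random such function is one-way.
import random

def random_local_fn(n, m, seed=0):
    rng = random.Random(seed)
    taps = [rng.sample(range(n), 3) for _ in range(m)]   # public wiring
    def f(x):                        # x: list of n bits -> m bits
        return [x[i] ^ (x[j] & x[k]) for (i, j, k) in taps]
    return f

n = 32
f = random_local_fn(n, m=n)          # a length-preserving instance
rng_x = random.Random(1)
x = [rng_x.randrange(2) for _ in range(n)]
print(f(x)[:8])                      # each output bit reads only 3 input bits
```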

8.
In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short exponent discrete log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the original scheme of Girault et al. is based. The second signature scheme we construct is a modification of the scheme of Lyubashevsky (Advances in Cryptology—ASIACRYPT 2009, vol 5912 of Lecture Notes in Computer Science, pp 598–616, Tokyo, Japan, December 6–10, 2009. Springer, Berlin, 2009) that is based on the worst-case hardness of the shortest vector problem in ideal lattices. The third scheme is a very simple signature scheme that is based directly on the hardness of the subset sum problem. We also present a general transformation that converts what we term lossy identification schemes into signature schemes with tight security reductions. We believe that this greatly simplifies the task of constructing and proving the security of such signature schemes.

9.
Recently, multiserver queues with setup times have been extensively studied because they have applications in power-saving data centers. A challenging model is the M/M/c/Setup queue, where a server is turned off when it is idle and is turned on if there are waiting jobs. Recently, Gandhi et al. (in: Proceedings of the ACM SIGMETRICS, pp. 153–166, ACM, 2013; Queueing Syst. 77(2):177–209, 2014) obtained the generating function of the number of jobs in the system, as well as the Laplace transform of the response time, using the recursive renewal reward approach and the distributional Little’s law (Keilson and Servi in Oper Res Lett 7(5):223–227, 1988). In this paper, we derive exact solutions for the joint stationary queue length distribution of the same model using two alternative methodologies: the generating function approach and the matrix analytic method. The generating function approach yields exact closed-form expressions for the joint stationary queue length distribution and the conditional decomposition formula. On the other hand, the matrix analytic approach leads to an exact recursive algorithm to calculate the joint stationary distribution and performance measures, thereby providing some application insights.
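As a hedged numerical companion to such exact solutions (not the paper's method): one can sanity-check results for this model by truncating the underlying CTMC on states (n, i) = (jobs in system, servers already on), with min(n, c) − i servers in setup, and solving πQ = 0 directly. The rates and truncation level below are illustrative.

```python
# Brute-force numerical check for M/M/c/Setup: build the generator of
# the CTMC on states (n, i), truncate at n <= N, and solve pi Q = 0.
# Illustrative parameters; a stand-in for the exact analysis, not it.
import numpy as np

lam, mu, alpha, c, N = 3.0, 1.0, 0.5, 4, 200   # arrival, service, setup rates

states = [(n, i) for n in range(N + 1) for i in range(min(n, c) + 1)]
idx = {s: k for k, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (n, i) in states:
    k = idx[(n, i)]
    if n < N:                                   # arrival
        Q[k, idx[(n + 1, i)]] += lam
    if i >= 1 and n >= 1:                       # service completion;
        Q[k, idx[(n - 1, min(i, n - 1))]] += i * mu   # idle server turns off
    setting_up = min(n, c) - i
    if setting_up > 0:                          # one setup finishes
        Q[k, idx[(n, i + 1)]] += setting_up * alpha
    Q[k, k] = -Q[k].sum()

A = np.vstack([Q.T, np.ones(len(states))])      # pi Q = 0 and sum(pi) = 1
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

mean_jobs = sum(n * pi[idx[(n, i)]] for (n, i) in states)
print(f"E[number in system] ~ {mean_jobs:.3f}")
```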

10.
We present a positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known impossibility results (Barak et al. in Advances in Cryptology—CRYPTO ’01, 2002) for general obfuscation and to recent impossibility and implausibility results (Goldwasser and Kalai in 46th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 553–562, 2005) for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions (Canetti in Advances in Cryptology—CRYPTO ’97, 1997; Wee in 37th ACM Symposium on Theory of Computing (STOC), pp. 523–532, 2005), our obfuscation result applies to the significantly more complex and widely used re-encryption functionality. This functionality takes a ciphertext for a message m encrypted under Alice’s public key and transforms it into a ciphertext for the same message m under Bob’s public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, our scheme satisfies a definition of obfuscation which incorporates more security-aware provisions.
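For intuition about the functionality itself (not the paper's obfuscation-friendly construction), here is a classical BBS98-style ElGamal re-encryption sketch on toy parameters: the proxy raises the randomness component to rk = a/b mod q, turning Alice's ciphertext into one Bob can decrypt.

```python
# The re-encryption functionality on toy ElGamal parameters
# (BBS98-style atomic proxy re-encryption; NOT the paper's scheme):
# (g^r, m * pk_A^r) becomes a ciphertext for the same m under pk_B.
p, q, g = 23, 11, 4                  # order-11 subgroup of Z_23*

def enc(pk, m, r):
    return (pow(g, r, p), (m * pow(pk, r, p)) % p)

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - sk, p)) % p   # c2 / c1^sk inside the subgroup

a, b = 3, 7                          # Alice's and Bob's secret keys
pk_a, pk_b = pow(g, a, p), pow(g, b, p)
rk = (a * pow(b, -1, q)) % q         # re-encryption key a/b mod q

m = 13                               # message in the subgroup
c1, c2 = enc(pk_a, m, r=5)
re_ct = (pow(c1, rk, p), c2)         # the proxy's transformation
assert dec(b, re_ct) == m            # Bob now decrypts Alice's ciphertext
```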

11.
A computational secret-sharing scheme is a method that enables a dealer, who has a secret, to distribute this secret among a set of parties such that a “qualified” subset of parties can efficiently reconstruct the secret while any “unqualified” subset of parties cannot efficiently learn anything about the secret. The collection of “qualified” subsets is defined by a monotone Boolean function. It has been a major open problem to understand which (monotone) functions can be realized by a computational secret-sharing scheme. Yao suggested a method for secret-sharing for any function that has a polynomial-size monotone circuit (a class which is strictly smaller than the class of monotone functions in \({\mathsf {P}}\)). Around 1990, Rudich raised the possibility of obtaining secret-sharing for all monotone functions in \({\mathsf {NP}}\): in order to reconstruct the secret a set of parties must be “qualified” and provide a witness attesting to this fact. Recently, Garg et al. (Symposium on theory of computing conference, STOC, pp 467–476, 2013) put forward the concept of witness encryption, where the goal is to encrypt a message relative to a statement \(x\in L\) for a language \(L\in {\mathsf {NP}}\) such that anyone holding a witness to the statement can decrypt the message; however, if \(x\notin L\), then it is computationally hard to decrypt. Garg et al. showed how to construct several cryptographic primitives from witness encryption and gave a candidate construction. One can show that computational secret-sharing implies witness encryption for the same language. Our main result is the converse: we give a construction of a computational secret-sharing scheme for any monotone function in \({\mathsf {NP}}\) assuming witness encryption for \({\mathsf {NP}}\) and one-way functions. As a consequence, we get a completeness theorem for secret-sharing: a computational secret-sharing scheme for any single monotone \({\mathsf {NP}}\)-complete function implies a computational secret-sharing scheme for every monotone function in \({\mathsf {NP}}\).
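For background, here is the classical information-theoretic sharing for a small monotone formula (Benaloh-Leichter style), which conveys what "qualified subsets reconstruct, unqualified subsets learn nothing" means before any computational machinery enters; illustrative code only.

```python
# Classical secret sharing for the monotone formula (P1 AND P2) OR P3
# over byte strings: an AND gate splits the secret additively (XOR),
# an OR gate duplicates it. Background illustration for the
# computational question studied in the abstract.
import os

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

secret = os.urandom(16)
r = os.urandom(16)

shares = {
    "P1": r,                  # AND branch: secret = share(P1) xor share(P2)
    "P2": xor(secret, r),
    "P3": secret,             # OR branch: P3 alone is qualified
}

assert xor(shares["P1"], shares["P2"]) == secret   # {P1, P2} qualified
assert shares["P3"] == secret                      # {P3} qualified
# {P1} or {P2} alone holds a uniformly random string, independent of the secret.
```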

12.
We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve (Proceedings of the 18th annual ACM symposium on theory of computing, pp 364–369, 1986) showed that for any two-party \(r\)-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by \(\varOmega (1/r)\). However, the best previously known protocol only guarantees \(O(1/\sqrt{r})\) bias, and the question of whether Cleve’s bound is tight has remained open for more than 20 years. In this paper, we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve’s lower bound is tight: We construct an \(r\)-round protocol with bias \(O(1/r)\).
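A minimal sketch of the protocol structure in question, for the single-round case: Blum-style coin flipping from a commitment (here a hash-based toy in random-oracle style). An adversary who dislikes the outcome can still abort after seeing the other party's move, and quantifying that residual bias across r rounds is exactly the subject above.

```python
# Blum-style coin flipping with a hash commitment (toy, random-oracle
# flavored). Honest runs output an unbiased bit; a party that aborts
# after seeing the other side's move can bias the honest output, which
# is the effect the Omega(1/r) lower bound quantifies.
import os, secrets, hashlib

def commit(bit: int):
    r = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + r).digest(), (bit, r)

def open_ok(com, bit, r):
    return hashlib.sha256(bytes([bit]) + r).digest() == com

a = secrets.randbits(1)
com, opening = commit(a)       # 1. Alice commits to her bit a
b = secrets.randbits(1)        # 2. Bob sends his bit b in the clear
bit_a, r = opening             # 3. Alice opens; Bob verifies
assert open_ok(com, bit_a, r)
coin = bit_a ^ b               # common output: a xor b
print("coin =", coin)
```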

13.
We revisit the security definitions of blind signatures as proposed by Pointcheval and Stern (J Cryptol 13(3):361–396, 2000). Security comprises the notions of one-more unforgeability, preventing a malicious user from generating more signatures than requested, and of blindness, preventing a malicious signer from learning useful information about the user’s messages. Although this definition is well established nowadays, we show that there are still desirable security properties that fall outside of the model. More precisely, the original unforgeability definition does not exclude that an adversary verifiably uses the same message m for signing twice and is then still able to produce another signature for a new message \(m'\ne m\). Intuitively, this should not be possible; yet, it is not captured in the original definition, because the number of signatures equals the number of requests. We thus propose a stronger notion, called honest-user unforgeability, that covers these attacks. We give a simple and efficient transformation that turns any unforgeable blind signature scheme (with deterministic verification) into an honest-user unforgeable one.

14.
The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally sound proofs where the verifier’s work is essentially independent of the complexity of the NP non-deterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model (Micali in SIAM J Comput 30(4):1253–1298, 2000), prior to our work the only existing candidate construction was based on an elaborate assumption that is tailored to a specific protocol (Di Crescenzo and Lipmaa in Proceedings of the 4th conference on computability in Europe, 2008). We formulate a general and relatively natural notion of an extractable collision-resistant hash function (ECRH) and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa’s protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive adaptive argument of knowledge (SNARK). We then propose several candidate constructions for ECRHs and relaxations thereof. We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero-knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption. Going beyond ECRHs, we formulate the notion of extractable one-way functions (EOWFs). Assuming the existence of a natural variant of EOWFs, we construct a two-message selective-opening-attack-secure commitment scheme and a three-round zero-knowledge argument of knowledge. Furthermore, if the EOWFs are concurrently extractable, the three-round zero-knowledge protocol is also concurrent zero knowledge. Our constructions circumvent previous black-box impossibility results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.

15.
In Jung et al. (Electron Lett 48(10):557–558, 1), a double noise coupling scheme was proposed for ΔΣ analog-to-digital converters (ADCs) to achieve wideband, high-accuracy performance combined with low power consumption. In this paper, an improved version of the double noise coupling ΔΣ ADC is presented. The improved architecture reduces the power consumption significantly by reducing the output swing of the second integrator in the modulator. The improved double noise coupling ΔΣ ADC also relaxes the feedback timing of the modulator using a triple sampling technique (Kanazawa et al. in IEEE Custom Integrated Circuit Conference, 2). Thus, there is no need for a high-speed comparator and DEM circuitry, even in high-speed applications. By using both techniques, the performance of the double noise coupling ΔΣ ADC can be improved significantly.
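For readers outside the field, a hedged background sketch (generic first-order ΔΣ behavior, not the double noise coupling architecture above): a delta-sigma modulator trades amplitude resolution for oversampling, encoding the input in the average of a 1-bit stream while shaping quantization noise out of band.

```python
# Background only: a plain first-order delta-sigma modulator, to show
# the noise shaping that architectures such as (double) noise coupling
# push further. NOT the paper's modulator.
def first_order_dsm(samples):
    integ, v, out = 0.0, 0.0, []
    for x in samples:
        integ += x - v                    # integrate the error u - y (feedback DAC)
        v = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        out.append(v)
    return out

# DC input 0.3: the 1-bit stream's running average converges to the
# input; the quantization error is pushed to high frequencies.
y = first_order_dsm([0.3] * 20000)
print(sum(y) / len(y))                    # ~0.3
```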

16.
As new network applications have arisen rapidly in recent years, it is becoming more difficult to predict the exact traffic pattern of a network. In consequence, a routing scheme based on a single traffic demand matrix often leads to poor performance. Oblivious routing (Racke in Proceedings of the 43rd annual IEEE symposium on foundations of computer science, pp. 43–52, 2002) is a technique for tackling the traffic demand uncertainty problem. A routing scheme derived from this principle aims to achieve predictable performance for a set of traffic matrices. Oblivious routing can certainly be an effective tool to handle traffic demand uncertainty in a wireless mesh network (WMN). However, a WMN has an additional tool that a wireline network does not have: dynamic bandwidth allocation. A router in a WMN can dynamically assign bandwidth to its attached links. This capability had never previously been exploited in works on oblivious routing for a spatial time division multiple access (STDMA) based WMN. Another useful insight is that although it is impossible to know the exact traffic matrix, it is relatively easy to estimate the amount of traffic routed through a link when the routing scheme is given. Based on these two insights, we propose a new oblivious routing framework for STDMA WMNs. Both analytical models and simulation results are presented in this paper to prove that the performance—in terms of throughput, queue lengths, and fairness—of the proposed scheme can achieve significant gains over conventional oblivious routing schemes for STDMA based WMNs.

17.
We study the problem of constructing locally computable universal one-way hash functions (UOWHFs) \(\mathcal {H}:\{0,1\}^n \rightarrow \{0,1\}^m\). A construction with constant output locality, where every bit of the output depends only on a constant number of bits of the input, was established by Applebaum et al. (SIAM J Comput 36(4):845–888, 2006). However, this construction suffers from two limitations: (1) it can only achieve a sublinear shrinkage of \(n-m=n^{1-\epsilon }\) and (2) it has a super-constant input locality, i.e., some inputs influence a large super-constant number of outputs. This leaves open the question of realizing UOWHFs with constant output locality and linear shrinkage of \(n-m= \epsilon n\), or UOWHFs with constant input locality and minimal shrinkage of \(n-m=1\). We settle both questions simultaneously by providing the first construction of UOWHFs with linear shrinkage, constant input locality and constant output locality. Our construction is based on the one-wayness of “random” local functions—a variant of an assumption made by Goldreich (Studies in Complexity and Cryptography, pp. 76–87, 2011; ECCC 2010). Using a transformation of Ishai et al. (STOC, 2008), our UOWHFs give rise to a digital signature scheme with a minimal additive complexity overhead: signing n-bit messages with security parameter \(\kappa \) takes only \(O(n+\kappa )\) time instead of \(O(n\kappa )\) as in typical constructions. Previously, such signatures were only known to exist under an exponential hardness assumption. As an additional contribution, we obtain new locally computable hardness amplification procedures for UOWHFs that preserve linear shrinkage.

18.
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary’s point of view. In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary’s point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees. We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the d-linear assumption for any d≥1 (including, in particular, the decisional Diffie–Hellman assumption), and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, the quadratic residuosity assumption and Paillier’s composite residuosity assumption). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.

19.
In this paper, we study two fundamental functionalities, oblivious polynomial evaluation in the exponent and set-intersection, and introduce a new technique for designing efficient secure protocols for these problems (and others). Our starting point is the technique for verifiable delegation of polynomial evaluations using algebraic PRFs (Benabbas et al. in CRYPTO, 2011). We use this tool, which is useful for achieving verifiability in the outsourced setting, to achieve privacy in the standard two-party setting. Our results imply new, simple and efficient oblivious polynomial evaluation (OPE) protocols. We further show that our OPE protocols are readily used for secure set-intersection, implying much simpler protocols in the plain model. As a side result, we demonstrate the usefulness of algebraic PRFs for various search functionalities, such as keyword search and oblivious transfer with adaptive queries. Our protocols are secure under full simulation-based definitions in the presence of malicious adversaries.
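The "in the exponent" part of the functionality can be made concrete in a few lines (toy parameters, and only the homomorphic evaluation step, not the oblivious protocol or the algebraic-PRF machinery): given g^{a_i} for the coefficients of P, anyone can compute g^{P(t)}.

```python
# Polynomial evaluation in the exponent: from the group elements
# g^{a_0}, ..., g^{a_d} hiding P's coefficients, compute
# g^{P(t)} = prod_i (g^{a_i})^(t^i) for any point t.
p, q, g = 23, 11, 4                      # order-11 subgroup of Z_23*

coeffs = [2, 5, 7]                       # P(X) = 2 + 5X + 7X^2 over Z_q
hidden = [pow(g, a, p) for a in coeffs]  # the receiver sees only g^{a_i}

def eval_in_exponent(hidden, t):
    acc = 1
    for i, h in enumerate(hidden):
        acc = (acc * pow(h, pow(t, i, q), p)) % p  # exponents live mod q
    return acc

t = 3
P_t = (2 + 5 * t + 7 * t * t) % q        # P(3) = 80, and 80 mod 11 = 3
assert eval_in_exponent(hidden, t) == pow(g, P_t, p)
```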

20.
Our review (Nusinovich et al. Journal of Infrared, Millimeter, and Terahertz Waves, 35, 325, 2014) proved to be of interest to gyrotron researchers, gyrotron users, and specialists in neighboring fields of physics, but drew fair criticism for a number of historical omissions. My co-authors G. S. Nusinovich and M. K. A. Thumm therefore advised me to supplement our paper (Nusinovich et al. Journal of Infrared, Millimeter, and Terahertz Waves, 35, 325, 2014) with the following memoir.
