Similar Documents
20 similar documents found (search time: 31 ms).
1.
The r-round (iterated) Even–Mansour cipher (also known as key-alternating cipher) defines a block cipher from r fixed public n-bit permutations \(P_1,\ldots ,P_r\) as follows: Given a sequence of n-bit round keys \(k_0,\ldots ,k_r\), an n-bit plaintext x is encrypted by xoring round key \(k_0\), applying permutation \(P_1\), xoring round key \(k_1\), etc. The (strong) pseudorandomness of this construction in the random permutation model (i.e., when the permutations \(P_1,\ldots ,P_r\) are public random permutation oracles that the adversary can query in a black-box way) was studied in a number of recent papers, culminating with the work of Chen and Steinberger (EUROCRYPT 2014), who proved that the r-round Even–Mansour cipher is indistinguishable from a truly random permutation up to \(\mathcal {O}(2^{\frac{rn}{r+1}})\) queries of any adaptive adversary (which is an optimal security bound since it matches a simple distinguishing attack). All results in this entire line of work share the common restriction that they only hold under the assumption that the round keys \(k_0,\ldots ,k_r\) and the permutations \(P_1,\ldots ,P_r\) are independent. In particular, for two rounds, the current state of knowledge is that the block cipher \(E(x)=k_2\oplus P_2(k_1\oplus P_1(k_0\oplus x))\) is provably secure up to \(\mathcal {O}(2^{2n/3})\) queries of the adversary, when \(k_0\), \(k_1\), and \(k_2\) are three independent n-bit keys, and \(P_1\) and \(P_2\) are two independent random n-bit permutations. In this paper, we ask whether one can obtain a similar bound for the two-round Even–Mansour cipher from just one n-bit key and one n-bit permutation. Our answer is positive: When the three n-bit round keys \(k_0\), \(k_1\), and \(k_2\) are adequately derived from an n-bit master key k, and the same permutation P is used in place of \(P_1\) and \(P_2\), we prove a qualitatively similar \(\widetilde{\mathcal {O}}(2^{2n/3})\) security bound (in the random permutation model). To the best of our knowledge, this is the first “beyond the birthday bound” security result for AES-like ciphers that does not assume independent round keys.
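As a concrete illustration of the construction under study, the sketch below (a toy Python example, not the paper's analysis) instantiates the single-permutation two-round Even–Mansour cipher on an 8-bit block; the trivial key derivation used here is only a placeholder, since the paper prescribes a particular derivation of \(k_0, k_1, k_2\) from the master key.

```python
# Minimal sketch (not the paper's proof): the single-key, single-permutation
# two-round Even-Mansour cipher, instantiated with a toy 8-bit random
# permutation. The key derivation below is a placeholder assumption.
import random

N = 8  # toy block size in bits

rng = random.Random(0)
P = list(range(2 ** N))
rng.shuffle(P)           # one public random permutation, used in both rounds

def derive_round_keys(k):
    # Hypothetical key schedule for illustration only (identity derivation);
    # the paper derives k0, k1, k2 from the master key in a specific way.
    return k, k, k

def em2_encrypt(k, x):
    k0, k1, k2 = derive_round_keys(k)
    y = P[x ^ k0]        # xor k0, apply P
    y = P[y ^ k1]        # xor k1, apply the same P again
    return y ^ k2        # xor k2

if __name__ == "__main__":
    k = rng.randrange(2 ** N)
    x = 0x3A
    print(hex(em2_encrypt(k, x)))
```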

2.
In this article, numerical simulations illustrate how the gain recovery dynamics of a travelling-wave semiconductor optical amplifier (SOA) can be accelerated through structural optimization. A pump–probe scheme is utilized to study the effect of optimizing the SOA operational and structural parameters on its effective gain recovery time (\({\tau _\mathrm{e}}\)). A set of optimized SOA parameters is formulated from these gain recovery studies while keeping practical implementation considerations in view. The impacts of altering SOA structural and operational parameters such as injection current (I), amplifier length (L), active region width (w), active region thickness (t) and optical confinement factor (\({\varGamma } \)) are then investigated on the performance of cross-gain modulation (XGM) in an SOA-based all-optical half-subtracter, in terms of two designated performance metrics: quality factor (Q-factor) and extinction ratio (ER). It is revealed that an all-optical half-subtracter based on the recovery-time-optimized SOAs, arranged in a co-propagating configuration, exhibits improved Q-factor and ER (dB) performance at high bit rates of operation (up to 80 Gbps).
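For intuition about the quantity being optimized, the following sketch fits a single-exponential recovery to a synthetic pump–probe gain trace to extract an effective recovery time; the data, the one-exponential model, and all parameter values are assumptions for illustration, not the article's simulation model.

```python
# Minimal sketch, not the article's simulation: estimate an effective gain
# recovery time tau_e from a pump-probe gain trace by fitting the recovery
# G(t) = G_ss - dG*exp(-t/tau). The synthetic trace is made up.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, g_ss, dg, tau):
    return g_ss - dg * np.exp(-t / tau)

t = np.linspace(0, 200e-12, 400)                  # 0-200 ps
true = recovery(t, 25.0, 6.0, 35e-12)             # synthetic "measured" gain (dB)
noisy = true + np.random.default_rng(1).normal(0, 0.05, t.size)

(g_ss, dg, tau_e), _ = curve_fit(recovery, t, noisy, p0=(20.0, 5.0, 20e-12))
print(f"fitted tau_e ~ {tau_e * 1e12:.1f} ps")
```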

3.
This paper implements a new skin lesion detection method based on the genetic algorithm (GA) for optimizing the neutrosophic set (NS) operation to reduce the indeterminacy in dermoscopy images. Then, k-means clustering is applied to segment the skin lesion regions; the proposed method is therefore called optimized neutrosophic k-means (ONKM). On the training image set, an initial value of \(\alpha \) in the \(\alpha \)-mean operation of the NS is used with the GA to determine the optimized \(\alpha \) value. The Jaccard index is used as the fitness function during the optimization process. The GA found the optimal value in the \(\alpha \)-mean operation to be \(\alpha _{\mathrm{optimal}} =0.0014\), which achieved the best performance using fivefold cross-validation. Afterward, the dermoscopy images are transformed into the neutrosophic domain via three memberships, namely true, indeterminate, and false, using \(\alpha _{\mathrm{optimal}}\). The proposed ONKM method is then applied to segment the dermoscopy images. Different random subsets of 50 images from the ISIC 2016 challenge training dataset are used during the fivefold cross-validation to train the proposed system and determine \(\alpha _{\mathrm{optimal}}\). Several evaluation metrics, namely the Dice coefficient, specificity, sensitivity, and accuracy, are measured on the test images for the proposed ONKM method with \(\alpha _{\mathrm{optimal}} =0.0014\) and compared to the k-means and \(\gamma \)k-means methods. The results show the superiority of the ONKM method, with \(99.29\pm 1.61\%\) average accuracy, compared with the k-means and \(\gamma \)k-means methods.
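The following sketch illustrates only the optimization loop described above, with the Jaccard index as fitness; the `alpha_mean` stand-in, the synthetic image, and the use of random search instead of a full GA are assumptions for illustration and do not reproduce the paper's neutrosophic transform.

```python
# Minimal sketch under stated assumptions -- not the paper's ONKM pipeline.
# It shows the fitness loop only: score candidate alpha values by the Jaccard
# index of a k-means segmentation. The alpha_mean stand-in, synthetic image,
# ground truth, and random search (in place of a GA) are all placeholders.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic grayscale "lesion" image and its ground-truth mask (placeholders).
img = rng.normal(0.3, 0.05, (64, 64))
img[20:44, 20:44] += 0.4
truth = np.zeros((64, 64), bool)
truth[20:44, 20:44] = True

def alpha_mean(image, alpha):
    # Placeholder for the neutrosophic alpha-mean operation: blend each pixel
    # with its local mean, with alpha controlling the blend strength.
    return (1 - alpha) * image + alpha * uniform_filter(image, size=5)

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def fitness(alpha):
    x = alpha_mean(img, alpha).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
    mask = labels.reshape(img.shape).astype(bool)
    if mask.mean() > 0.5:          # make "lesion" the smaller cluster
        mask = ~mask
    return jaccard(mask, truth)

candidates = rng.uniform(0.0, 1.0, 20)   # random-search stand-in for the GA
best = max(candidates, key=fitness)
print(f"best alpha ~ {best:.4f}, Jaccard = {fitness(best):.3f}")
```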

4.
The lattice sphere decoder (LSD) searches for lattice points within a certain radius (d), and the closest point found is taken as the solution. It is well known that, as the initial radius (d) of the LSD increases, the complexity increases. Therefore, this paper aims to obtain an exact expression for the initial radius (d) that reduces the system complexity while maintaining reasonable performance. The derived expression shows that the initial radius (d) depends on the lattice dimension n, the signal-to-noise ratio (\(\gamma\)), and the noise variance \(\sigma^{2}\). Hence, this paper proposes a new LSD for BDTS based on this initial-radius technique. The proposed LSD achieves a good balance between complexity and performance. Other analytical expressions for complexity and performance in relation to (d) are also derived. The analytical results for system performance and complexity are observed to converge with their respective simulation results.
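To make the role of the initial radius concrete, here is a brute-force sketch of a sphere search over a tiny symbol alphabet; the 2×2 system and the radius choice are illustrative assumptions, since the paper's exact expression for d is not reproduced here.

```python
# Minimal sketch, not the paper's decoder: a brute-force "sphere" search that
# keeps only lattice points H @ s (s from a small symbol alphabet) lying
# within radius d of the received vector y, then returns the closest one.
# The 2x2 system, BPSK-like alphabet, and radius value are assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 2
H = rng.normal(size=(n, n))                 # channel / lattice basis
s_true = np.array([1, -1])                  # transmitted symbols
sigma = 0.1
y = H @ s_true + rng.normal(scale=sigma, size=n)

d = 3 * sigma * np.sqrt(n)                  # illustrative initial-radius choice

best, best_dist = None, np.inf
for s in itertools.product([-1, 1], repeat=n):
    dist = np.linalg.norm(y - H @ np.array(s))
    if dist <= d and dist < best_dist:      # inside the sphere and closest so far
        best, best_dist = s, dist

print("decoded symbols:", best)
```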

5.
Let g be an element of prime order p in an abelian group, and let \(\alpha \in \mathbb{Z}_p\). We show that if \(g\), \(g^{\alpha}\), and \(g^{\alpha^{d}}\) are given for a positive divisor d of \(p-1\), the secret key α can be computed deterministically in \(O(\sqrt{p/d}+\sqrt{d})\) exponentiations by using \(O(\max\{\sqrt{p/d},\sqrt{d}\})\) storage. If \(g^{\alpha^{i}}\) (i=0,1,2,…,2d) is given for a positive divisor d of \(p+1\), α can be computed in \(O(\sqrt{p/d}+d)\) exponentiations by using \(O(\max\{\sqrt{p/d},\sqrt{d}\})\) storage. We also propose space-efficient but probabilistic algorithms for the same problem, which have the same computational complexities as the deterministic algorithms. As applications of the proposed algorithms, we show that the strong Diffie–Hellman problem and related problems with public \(g^{\alpha},\ldots,g^{\alpha^{d}}\) have computational complexity up to \(O(\sqrt{d}/\log p)\) less than the generic algorithm complexity of the discrete logarithm problem when \(p-1\) (resp. \(p+1\)) has a divisor \(d \le p^{1/2}\) (resp. \(d \le p^{1/3}\)). Under the same conditions on d, the algorithms are also applicable to recovering the secret key in \(O(\sqrt{p/d}\cdot \log p)\) for Boldyreva’s blind signature scheme and the textbook ElGamal scheme when d signature or decryption queries are allowed.
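For context, the sketch below implements the generic baby-step giant-step baseline, which costs \(O(\sqrt{p})\) exponentiations and storage; it is not the paper's algorithm, whose point is that the auxiliary inputs \(g^{\alpha^{d}}\) reduce this cost when d divides \(p\pm 1\). The small prime-order subgroup used is purely illustrative.

```python
# Minimal sketch: the generic baby-step giant-step (BSGS) discrete-log
# baseline, using O(sqrt(p)) group operations and storage. This is NOT the
# paper's algorithm; it is the benchmark that the auxiliary inputs beat.
def bsgs(g, h, p, mod):
    """Find x in [0, p) with pow(g, x, mod) == h, where g has order p mod `mod`."""
    m = int(p ** 0.5) + 1
    baby = {pow(g, j, mod): j for j in range(m)}        # baby steps: g^j
    factor = pow(g, (p - m) % p, mod)                   # g^(-m) inside the order-p subgroup
    gamma = h
    for i in range(m):                                  # giant steps: h * g^(-i*m)
        if gamma in baby:
            return (i * m + baby[gamma]) % p
        gamma = (gamma * factor) % mod
    return None

# Toy subgroup of order p = 101 inside Z_607^*: g = 2^6 mod 607 has order 101.
mod, p = 607, 101
g = pow(2, 6, mod)
alpha = 73
print(bsgs(g, pow(g, alpha, mod), p, mod))              # expect 73
```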

6.
Göös et al. (ITCS, 2015) have recently introduced the notion of Zero-Information Arthur–Merlin Protocols (\(\mathsf {ZAM}\)). In this model, which can be viewed as a private version of the standard Arthur–Merlin communication complexity game, Alice and Bob hold a pair of inputs x and y, respectively, and Merlin, the prover, attempts to convince them that some public function f evaluates to 1 on (x, y). In addition to standard completeness and soundness, Göös et al. require a “zero-knowledge” property which asserts that on each yes-input, the distribution of Merlin’s proof leaks no information about the inputs (x, y) to an external observer. In this paper, we relate this new notion to the well-studied model of Private Simultaneous Messages (\(\mathsf {PSM}\)) that was originally suggested by Feige et al. (STOC, 1994). Roughly speaking, we show that the randomness complexity of \(\mathsf {ZAM}\) corresponds to the communication complexity of \(\mathsf {PSM}\) and that the communication complexity of \(\mathsf {ZAM}\) corresponds to the randomness complexity of \(\mathsf {PSM}\). This relation works in both directions where different variants of \(\mathsf {PSM}\) are being used. As a secondary contribution, we reveal new connections between different variants of \(\mathsf {PSM} \) protocols which we believe to be of independent interest. Our results give rise to better \(\mathsf {ZAM}\) protocols based on existing \(\mathsf {PSM}\) protocols, and to better protocols for conditional disclosure of secrets (a variant of \(\mathsf {PSM}\)) from existing \(\mathsf {ZAM} \)s.

7.
We analyze the security of the Thorp shuffle, or, equivalently, a maximally unbalanced Feistel network. Roughly said, the Thorp shuffle on N cards mixes any \(N^{1-1/r}\) of them in \(O(r\lg N)\) steps. Correspondingly, making O(r) passes of maximally unbalanced Feistel over an n-bit string ensures CCA security to \(2^{n(1-1/r)}\) queries. Our results, which employ Markov chain techniques, particularly couplings, help to justify a practical, although still relatively inefficient, blockcipher-based scheme for deterministically enciphering credit card numbers and the like.
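A one-pass sketch of the maximally unbalanced Feistel view of the Thorp shuffle is shown below; the SHA-256-based round function is a stand-in for the scheme's PRF, and the block size and round count are illustrative assumptions.

```python
# Minimal sketch under assumptions: one pass of a maximally unbalanced Feistel
# network on an n-bit string, which is one way to view the Thorp shuffle on
# N = 2^n cards. The hash-based round function is only a stand-in for a PRF.
import hashlib

def round_bit(key: bytes, rnd: int, right: int) -> int:
    digest = hashlib.sha256(key + rnd.to_bytes(4, "big") + right.to_bytes(8, "big")).digest()
    return digest[0] & 1

def thorp_pass(x: int, n: int, key: bytes, rnd: int) -> int:
    msb = (x >> (n - 1)) & 1          # 1-bit "left half"
    rest = x & ((1 << (n - 1)) - 1)   # (n-1)-bit "right half"
    new_bit = msb ^ round_bit(key, rnd, rest)
    return (rest << 1) | new_bit      # right half shifts up, new bit enters at the bottom

def encipher(x: int, n: int, key: bytes, rounds: int) -> int:
    for r in range(rounds):
        x = thorp_pass(x, n, key, r)
    return x

if __name__ == "__main__":
    n = 16                                     # e.g. a 16-bit message space
    c = encipher(0x1234, n, b"demo-key", rounds=4 * n)
    print(hex(c))
```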

8.
We study the problem of constructing locally computable universal one-way hash functions (UOWHFs) \(\mathcal {H}:\{0,1\}^n \rightarrow \{0,1\}^m\). A construction with constant output locality, where every bit of the output depends only on a constant number of bits of the input, was established by Applebaum et al. (SIAM J Comput 36(4):845–888, 2006). However, this construction suffers from two limitations: (1) it can only achieve a sublinear shrinkage of \(n-m=n^{1-\epsilon }\) and (2) it has a super-constant input locality, i.e., some inputs influence a large super-constant number of outputs. This leaves open the question of realizing UOWHFs with constant output locality and linear shrinkage of \(n-m= \epsilon n\), or UOWHFs with constant input locality and minimal shrinkage of \(n-m=1\). We settle both questions simultaneously by providing the first construction of UOWHFs with linear shrinkage, constant input locality and constant output locality. Our construction is based on the one-wayness of “random” local functions—a variant of an assumption made by Goldreich (Studies in Complexity and Cryptography, 76–87, 2011; ECCC 2010). Using a transformation of Ishai et al. (STOC, 2008), our UOWHFs give rise to a digital signature scheme with a minimal additive complexity overhead: signing n-bit messages with security parameter \(\kappa \) takes only \(O(n+\kappa )\) time instead of \(O(n\kappa )\) as in typical constructions. Previously, such signatures were only known to exist under an exponential hardness assumption. As an additional contribution, we obtain new locally computable hardness amplification procedures for UOWHFs that preserve linear shrinkage.
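To illustrate what constant output locality means, the sketch below evaluates a Goldreich-style random local function in which every output bit depends on a constant number d of randomly chosen input bits; the sizes and the particular predicate are illustrative assumptions and carry no security claim.

```python
# Minimal sketch of a "random local function" in Goldreich's style, a variant
# of which underlies the abstract's assumption: each output bit applies a
# fixed d-ary predicate to d randomly chosen input bits, so output locality
# is the constant d. Parameters and predicate are illustrative only.
import random

rng = random.Random(0)
n, m, d = 32, 24, 5                                   # input bits, output bits, locality

# Fix the public "input graph": for each output, d random input positions.
graph = [rng.sample(range(n), d) for _ in range(m)]

def predicate(bits):
    # Example d=5 predicate: XOR of the first three bits plus an AND of the last two.
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def local_function(x):
    return [predicate([x[i] for i in idx]) for idx in graph]

x = [rng.randint(0, 1) for _ in range(n)]
print(local_function(x))
```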

9.
The lifetime of a network can be increased by increasing the network energy. The network energy can be increased either by increasing the number of sensors or by increasing the initial energy of some sensors without increasing their number. Increasing the network energy by deploying extra sensors is about ten times costlier than equipping some sensors with higher initial energy. Increasing the initial energy of some sensors leads to heterogeneous nodes in the network. In this paper, we propose a multilevel heterogeneous network model that is characterized by two types of parameters: a primary parameter and secondary parameters. The primary parameter decides the level of heterogeneity, given the values of the secondary parameters. This model can describe a network up to the nth level of heterogeneity (n is a finite number). We evaluate the network performance by applying HEED, a clustering protocol, to this model; the resulting protocol is named MLHEED (Multi Level HEED). For the nth level of heterogeneity, this protocol is denoted by MLHEED-n. The number of nodes of each type at any level of heterogeneity is determined by the secondary model parameters. The MLHEED protocol (at every level of heterogeneity) considers two variables, residual energy and node density, for deciding the cluster heads. We also consider a fuzzy implementation of MLHEED in which four variables are used to decide the cluster heads: residual energy, node density, average energy, and distance between the base station and the sensor nodes. In this work, we illustrate the network model up to seven levels (\(1\le n\le 7\)). Experimentally, as the level of heterogeneity increases, the rate of energy dissipation decreases and hence the nodes stay alive for a longer time. MLHEED-m, \(m=2,3,4,5,6,7\), increases the network lifetime by \(73.05, 143.40, 213.17, 267.90, 348.60, 419.10\,\%\), respectively, while increasing the network energy by \(40, 57, 68.5, 78, 84, 92.5\,\%\), with respect to the original HEED protocol. In the case of the fuzzy implementation, MLHEEDFL-m, \(m=2,3,4,5,6,7,\) increases the network lifetime by \(282.7, 378.5, 435.78, 498.50, 582.63, 629.79\,\%\), respectively, for the same increase in the network energy as that of MLHEED (at all levels) with respect to the original HEED. The fuzzy implementation of HEED, MLHEEDFL-1, increases the network lifetime by \(176.6\,\%\) with respect to the original HEED with no increase in the network energy.
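As a reminder of the HEED-style election step that MLHEED extends, here is a sketch in which each node's cluster-head probability is driven by its residual energy and tentative heads are ranked by node density; the constants and the density rule are illustrative assumptions, not the paper's exact formulas.

```python
# Minimal sketch of a HEED-style cluster-head election step: the cluster-head
# probability scales with residual energy, and node density acts as a
# secondary criterion among tentative heads. All values are illustrative.
import random

C_PROB, P_MIN = 0.2, 1e-4          # illustrative initial CH fraction and probability floor

def ch_probability(e_residual, e_max):
    return min(max(C_PROB * e_residual / e_max, P_MIN), 1.0)

def elect_round(nodes, rng):
    """nodes: list of dicts with 'e', 'e_max', 'density'. Returns tentative CH ids."""
    tentative = [i for i, nd in enumerate(nodes)
                 if rng.random() < ch_probability(nd["e"], nd["e_max"])]
    # Secondary criterion: prefer higher node density among tentative heads.
    return sorted(tentative, key=lambda i: -nodes[i]["density"])

rng = random.Random(3)
nodes = [{"e": rng.uniform(0.2, 2.0), "e_max": 2.0, "density": rng.randint(3, 12)}
         for _ in range(50)]
print(elect_round(nodes, rng))
```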

10.
The performance of a two-way relay (TWR)-assisted mixed radio-frequency/free-space optical (RF/FSO) system is evaluated in this letter. The proposed system employs decode-and-forward relaying, where the relay is basically an interfacing node between two source nodes \(S_1\) and \(S_2\); \(S_1\) supports an RF signal, while \(S_2\) supports an FSO signal. The TWR-assisted system helps achieve spectral efficiency by managing bidirectional communication in three time slots, thus maximizing the achievable rate of the network. The RF link is subject to the generalized \(\eta -\mu \) distribution, and the optical channel is affected by path loss, pointing errors and gamma–gamma (gg) distributed atmospheric turbulence. Novel expressions for the probability density function and cumulative distribution function of the equivalent end-to-end signal-to-noise ratio (SNR) are derived. Capitalizing on these derived statistics of the end-to-end SNR, expressions for the outage probability and the bit-error rate for different binary and M-ary modulations are provided.

11.
The slide attack, presented by Biryukov and Wagner, has already become a classical tool in cryptanalysis of block ciphers. While it was used to mount practical attacks on a few cryptosystems, its practical applicability is limited, as typically, its time complexity is lower bounded by \(2^n\) (where n is the block size). There are only a few known scenarios in which the slide attack performs better than the \(2^n\) bound. In this paper, we concentrate on efficient slide attacks, whose time complexity is less than \(2^n\). We present a number of new attacks that apply in scenarios in which previously known slide attacks are either inapplicable, or require at least \(2^n\) operations. In particular, we present the first known slide attack on a Feistel construction with a 3-round self-similarity, and an attack with practical time complexity of \(2^{40}\) on a 128-bit key variant of the GOST block cipher with unknown S-boxes. The best previously known attack on the same variant, with known S-boxes (by Courtois), has time complexity of \(2^{91}\).

12.
In this paper, we investigate the impact of the transmitter finite extinction ratio and the receiver carrier recovery phase offset on the error performance of two optically preamplified hybrid M-ary pulse position modulation (PPM) systems with coherent detection. The first system, referred to as PB-mPPM, combines polarization division multiplexing (PDM) with binary phase-shift keying and M-ary PPM, and the other system, referred to as PQ-mPPM, combines PDM with quadrature phase-shift keying and M-ary PPM. We provide new expressions for the probability of bit error for PB-mPPM and PQ-mPPM under finite extinction ratios and phase offset. The extinction ratio study indicates that the coherent systems PB-mPPM and PQ-mPPM outperform the direct-detection ones. It also shows that, at \(P_b=10^{-9}\), PB-mPPM has a slight advantage over PQ-mPPM. For example, for a symbol size \(M=16\) and extinction ratio \(r=30\) dB, PB-mPPM requires 0.6 dB less SNR per bit than PQ-mPPM to achieve \(P_b=10^{-9}\). This investigation demonstrates that PB-mPPM is less complex and less sensitive to variations of the offset angle \(\theta \) than PQ-mPPM. For instance, for \(M=16\), \(r=30\) dB, and \(\theta =10^{\circ }\), PB-mPPM requires 1.6 dB less than PQ-mPPM to achieve \(P_b=10^{-9}\). However, PB-mPPM’s enhanced robustness to phase offset comes at the expense of a reduced bandwidth efficiency when compared to PQ-mPPM. For example, for \(M=2\) its bandwidth efficiency is 60 % that of PQ-mPPM and \(\approx 86\,\%\) for \(M=1024\). For these reasons, PB-mPPM can be considered a reasonable design trade-off for M-ary PPM systems.

13.
The flash-evaporation technique was utilized to fabricate undoped 1.35-μm- and 1.2-μm-thick lead iodide films at substrate temperatures \( T_{\rm{s}} = 150 \)°C and 200°C, respectively. The films were deposited onto a coplanar comb-like copper (Cu-) electrode pattern, previously coated on glass substrates, to form lateral metal–semiconductor–metal (MSM-) structures. The as-measured constant-temperature direct-current (dc) current–voltage (\( I\left( {V;T} \right) - V \)) curves of the obtained lateral coplanar Cu-PbI2-Cu samples (film plus electrodes) displayed remarkable ohmic behavior at all temperatures (\( T = 18 - 90\,^\circ {\hbox{C}} \)). Their dc electrical resistance \( R_{\rm{dc}} (T \)) revealed a single thermally-activated conduction mechanism over this temperature range, with activation energy \( E_{\rm{act}} \approx 0.90 - 0.98 \,{\hbox{eV}} \), slightly less than half of the room-temperature bandgap energy \( E_{\rm{g}} \) (\( \approx \,2.3\, {\hbox{eV}} \)) of undoped 2H-polytype PbI2 single crystals. The undoped flash-evaporated \( {\hbox{PbI}}_{\rm{x}} \) thin films were homogeneous and almost stoichiometric (\( x \approx 1.87 \)), in contrast to findings on lead iodide films prepared by other methods, and exhibited a highly crystalline hexagonal 2H-polytype structure with the c-axis perpendicular to the surface of substrates maintained at \( T_{\rm{s}} { \gtrsim }150^\circ {\hbox{C}} \). Photoconductivity measurements made on these lateral Cu-PbI2-Cu structures under on–off visible-light illumination reveal a feeble photoresponse at long wavelengths (\( \lambda > 570\,{\hbox{nm}} \)), but a strong response to blue light of photon energy \( E_{\rm{ph}} \) \( \approx \,2.73 \, {\hbox{eV}} \) (\( > E_{\rm{g}} \)), due to photogenerated electron–hole (e–h) pairs created via direct band-to-band electronic transitions. The constant-temperature/dc-voltage current–time \( I\left( {T,V} \right) - t \) curves of the studied lateral PbI2 MSM-structures at low ambient temperatures (\( T < 50^\circ {\hbox{C}} \)), after cutting off the blue-light illumination, exhibit two trapping mechanisms with different relaxation times. These relaxation times depend strongly on \( V \) and \( T \); at higher temperatures, thermally generated charge carriers in the PbI2 mask the photogenerated (e–h) pairs.
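The activation energy quoted above comes from an Arrhenius-type analysis of \( R_{\rm{dc}} (T) \); the sketch below shows that extraction on synthetic data, with all numbers made up for illustration.

```python
# Minimal sketch with synthetic data: extract a thermally-activated conduction
# energy E_act from resistance-vs-temperature data via an Arrhenius fit
# R(T) = R0 * exp(E_act / kT). The numbers below are made up for illustration.
import numpy as np

K_B = 8.617e-5                                # Boltzmann constant, eV/K

T = np.linspace(291, 363, 10)                 # ~18-90 C, in kelvin
E_TRUE, R0 = 0.95, 2.0e3                      # synthetic activation energy (eV) and prefactor (ohm)
R = R0 * np.exp(E_TRUE / (K_B * T))

# ln R = ln R0 + E_act * (1 / kT): the slope of ln R vs 1/(kT) gives E_act.
slope, intercept = np.polyfit(1.0 / (K_B * T), np.log(R), 1)
print(f"E_act ~ {slope:.3f} eV, R0 ~ {np.exp(intercept):.3g} ohm")
```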

14.
An oracle chooses a function f from the set of n-bit strings to itself, which is either a randomly chosen permutation or a randomly chosen function. When queried by an n-bit string w, the oracle computes f(w), truncates the m last bits, and returns only the first \(n-m\) bits of f(w). How many queries does a querying adversary need to submit in order to distinguish the truncated permutation from the (truncated) function? Hall et al. (Building PRFs from PRPs, Springer, Berlin, 1998) showed an algorithm for determining (with high probability) whether or not f is a permutation, using \(O(2^{\frac{m+n}{2}})\) queries. They also showed that if \(m < n/7\), a smaller number of queries will not suffice. For \(m > n/7\), their method gives a weaker bound. In this note, we first show how a modification of the approximation method used by Hall et al. can solve the problem completely. It extends the result to practically any m, showing that \(\varOmega (2^{\frac{m+n}{2}})\) queries are needed to get a non-negligible distinguishing advantage. However, more surprisingly, a better bound for the distinguishing advantage, which we can write, in a simplified form, as \(O\left( \min \left\{ \frac{q^2}{2^n},\,\frac{q}{2^{\frac{n+m}{2}}},\,1\right\} \right) ,\) can be obtained from a result of Stam published, in a different context, already in 1978. We also show that, at least in some cases, this bound is tight.
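The \(O(2^{\frac{m+n}{2}})\) query count can be made concrete with a collision-counting experiment: a truncated permutation yields slightly fewer output collisions than a truncated random function. The toy parameters below are assumptions chosen so the gap is visible in a quick simulation; this illustrates the idea only and is not the paper's proof.

```python
# Minimal simulation sketch of the collision-counting distinguisher behind the
# O(2^{(m+n)/2}) bound: a truncated random permutation produces slightly fewer
# output collisions than a truncated random function. Toy sizes are assumptions.
import random

n, m = 16, 8
N, OUT = 1 << n, 1 << (n - m)
q = 1 << ((n + m) // 2)            # around the threshold where the gap appears

def collisions(outputs):
    counts = {}
    for y in outputs:
        counts[y] = counts.get(y, 0) + 1
    return sum(c * (c - 1) // 2 for c in counts.values())

def sample(is_perm, rng):
    xs = rng.sample(range(N), q)                       # q distinct queries
    if is_perm:
        perm = rng.sample(range(N), N)                 # truncated random permutation
        ys = [perm[x] >> m for x in xs]                # keep the first n-m bits
    else:
        ys = [rng.randrange(OUT) for _ in xs]          # truncated random function
    return collisions(ys)

rng = random.Random(7)
trials = 30
perm_avg = sum(sample(True, rng) for _ in range(trials)) / trials
func_avg = sum(sample(False, rng) for _ in range(trials)) / trials
print(f"avg collisions: permutation {perm_avg:.1f}, function {func_avg:.1f}")
```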

15.
In typical applications of homomorphic encryption, the first step consists for Alice of encrypting some plaintext m under Bob’s public key \(\mathsf {pk}\) and of sending the ciphertext \(c = \mathsf {HE}_{\mathsf {pk}}(m)\) to some third-party evaluator Charlie. This paper specifically considers that first step, i.e., the problem of transmitting c as efficiently as possible from Alice to Charlie. As others suggested before, a form of compression is achieved using hybrid encryption. Given a symmetric encryption scheme \(\mathsf {E}\), Alice picks a random key k and sends a much smaller ciphertext \(c' = (\mathsf {HE}_{\mathsf {pk}}(k), \mathsf {E}_k(m))\) that Charlie decompresses homomorphically into the original c using a decryption circuit \(\mathcal {C}_{{\mathsf {E}^{-1}}}\). In this paper, we revisit that paradigm in light of its concrete implementation constraints; in particular, \(\mathsf {E}\) is chosen to be an additive IV-based stream cipher. We investigate the performance offered in this context by Trivium, which belongs to the eSTREAM portfolio, and we also propose a variant with 128-bit security: Kreyvium. We show that Trivium, whose security has been firmly established for over a decade, and the new variant Kreyvium both have excellent performance. We also describe a second construction, based on exponentiation in binary fields, which is impractical but sets the lowest depth record to \(8\) for \(128\)-bit security.
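Structurally, the compression step looks like the sketch below; the `he_encrypt` stub and the SHA-256 keystream are placeholders standing in for a real homomorphic scheme and for Trivium/Kreyvium, respectively.

```python
# Minimal structural sketch of the compression step described in the abstract:
# Alice sends (HE_pk(k), E_k(m)) instead of HE_pk(m). Everything here is a
# placeholder: he_encrypt is a stub for a real homomorphic scheme, and the
# "additive IV-based stream cipher" is emulated with a SHA-256 keystream,
# not Trivium or Kreyvium.
import hashlib
import os

def he_encrypt(pk, data: bytes) -> bytes:
    # Stub: a real deployment would homomorphically encrypt the key bits.
    return b"HE(" + data + b")"

def keystream(k: bytes, iv: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(k + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def compress_ciphertext(pk, m: bytes):
    k, iv = os.urandom(16), os.urandom(12)
    c_sym = bytes(a ^ b for a, b in zip(m, keystream(k, iv, len(m))))  # E_k(m)
    return he_encrypt(pk, k), iv, c_sym   # Charlie later evaluates E^{-1} homomorphically

print(compress_ciphertext(pk=None, m=b"example plaintext"))
```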

16.
This paper presents and evaluates the performance of wireless networks that utilize decode-and-forward relaying. The multi-hop relaying scheme is analyzed over Extended Generalized-\({\mathcal {K}}\) (\(\hbox {EG}{\mathcal {K}}\)) composite fading channels. To this end, new, exact, and easy-to-compute formulas for several performance metrics are derived. More specifically, exact mathematical formulas are derived for the cumulative distribution function, the generalized moments of the overall end-to-end signal-to-noise ratio, the outage probability (\({\hbox {P}}_{\text{out}}\)), the ergodic capacity (\({\mathcal {C}}_{\text{Ergodic}}\)), the moment generating function, and the average error probability (\({\hbox {Pr(e)}}\)) for different modulation schemes. Moreover, we carried out a series of computer simulation experiments in order to verify the accuracy of the derived framework. Finally, we discuss the impact of different parameters, including the fading/shadowing parameters, the transmitted power, and the number of hops, on the derived expressions.

17.
Methods for the spectral identification of hydroacoustic signals are considered, based on comparing the dependences of the quantiles \(\bar x_\alpha (f)\) of the Fourier (Hartley) spectra of hydroacoustic signals on the spectrum frequency f. The identification results are robust to anomalous interference in the channels through which the hydroacoustic signals propagate and are recorded and reproduced.

18.
In a wireless sensor network (WSN), data transfer from source to destination needs an optimized network with low energy consumption. The connected dominating set (CDS) of graph theory plays an important role in forming a virtual backbone network for a WSN. However, computing a minimum connected dominating set is an NP-hard problem, and the resulting backbone can be large; both the network size and the NP-hardness motivate researchers to improve virtual-backbone algorithms. In this paper, we propose a semigraph contiguous prevalent set (SCPS) algorithm over a semigraph structure to mitigate the NP-hardness and reduce the size of the virtual backbone. The virtual backbone built with the SCPS algorithm is evaluated with various protocols such as AODV, DSR, and DSDV. From the simulation results, we observe that network parameters such as size, diameter, average hopping, and waiting time between the nodes are reduced. The proposed SCPS construction method retains the best performance ratio of \((2 + \ln \Delta_{a} )|opt|,\) where \(|opt|\) is the size of an optimal adjacent dominating set (ADS) and \(\Delta_{a}\) is the maximum adjacent degree over all nodes of the network, and it has a low time complexity of \(O(n^{2})\), where n denotes the network size. Furthermore, a hardware implementation of the proposed SCPS virtual backbone shows that the network lifetime increases by about 86% for nodes powered by a 9 V battery.
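For intuition about what a virtual backbone is, here is a sketch of a standard greedy connected-dominating-set heuristic on an ordinary graph; it is not the SCPS/semigraph construction, and the example graph is made up.

```python
# Minimal sketch of a standard greedy connected-dominating-set heuristic on an
# ordinary graph -- not the paper's SCPS/semigraph construction: grow a
# connected set until every node is either in the set or adjacent to it.
def greedy_cds(adj):
    start = max(adj, key=lambda v: len(adj[v]))        # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while len(covered) < len(adj):
        # Candidates: covered nodes adjacent to the current backbone (keeps it connected).
        candidates = [v for v in covered - cds if any(u in cds for u in adj[v])]
        best = max(candidates, key=lambda v: len(set(adj[v]) - covered))
        cds.add(best)
        covered |= set(adj[best])
    return cds

adj = {
    1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5],
    4: [2, 6], 5: [3, 6], 6: [4, 5, 7], 7: [6],
}
print(greedy_cds(adj))   # a small backbone dominating all seven nodes
```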

19.
This paper describes Census, a protocol for data aggregation and statistical counting in MANETs. Census operates by circulating a set of tokens in the network using biased random walks such that each node is visited by at least one token. The protocol is structure-free so as to avoid high messaging overhead for maintaining structure in the presence of node mobility. It biases the random walks of tokens so as to achieve fast cover time; the bias involves short albeit multi-hop gradients that guide the tokens towards hitherto unvisited nodes. Census thus achieves a cover time of O(N) and message overhead of \(O(N\,log(N))\) where N is the number of nodes. Notably, it enjoys scalability and robustness, which we demonstrate via simulations in networks ranging from 100 to 4000 nodes under different network densities and mobility models. We also observe a speedup by a factor of k when k different tokens are used (\(1 \le k \le \sqrt{N}\)).
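The bias toward unvisited nodes can be illustrated with a toy walk on a static graph: the sketch below compares a one-hop-biased walk with a uniform walk. Census itself uses multi-hop gradients on a mobile network, so this is only an illustration of the idea; the ring-with-chords graph is made up.

```python
# Minimal sketch: a token random walk that prefers neighbors it has not yet
# visited shortens the cover time compared with a uniform walk. The one-hop
# bias and the toy static graph are illustrative stand-ins for Census's
# multi-hop gradients on a MANET.
import random

def cover_time(adj, start, rng, biased=True):
    visited, node, steps = {start}, start, 0
    while len(visited) < len(adj):
        unvisited = [v for v in adj[node] if v not in visited]
        if biased and unvisited:
            node = rng.choice(unvisited)     # bias toward hitherto-unvisited neighbors
        else:
            node = rng.choice(adj[node])
        visited.add(node)
        steps += 1
    return steps

# Ring of 100 nodes with a few random chords.
N = 100
adj = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
rng = random.Random(5)
for _ in range(20):
    a, b = rng.randrange(N), rng.randrange(N)
    if a != b and b not in adj[a]:
        adj[a].append(b)
        adj[b].append(a)

print("biased:", cover_time(adj, 0, random.Random(1), True),
      "uniform:", cover_time(adj, 0, random.Random(1), False))
```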

20.
A fractor is a simple fractional-order system. Its transfer function is \(1/Fs^{\alpha }\); the coefficient, F, is called the fractance, and \(\alpha \) is called the exponent of the fractor. This paper presents how a fractor can be realized, using an RC ladder circuit, meeting predefined specifications on both F and \(\alpha \). Commonly reported fractors have \(\alpha \) between 0 and 1, so their constant phase angles (CPA) are always restricted to between \(0^{\circ }\) and \(-90^{\circ }\). This work employs the GIC topology to realize fractors from any of the four quadrants, that is, fractors with \(\alpha \) between \(-2\) and \(+2\). Hence, one can achieve any desired CPA between \(+180^{\circ }\) and \(-180^{\circ }\). The paper also shows how the GIC parameters can be used to tune the fractance of emulated fractors in real time, thus realizing dynamic fractors. In this work, a number of fractors are developed as per the proposed technique, their impedance characteristics are studied, and their fractance values are tuned experimentally.
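The constant-phase property follows directly from the transfer function \(1/Fs^{\alpha }\): with \(s = j\omega \), the impedance phase is \(-\alpha \cdot 90^{\circ }\) at every frequency. The sketch below checks this numerically for arbitrary, hypothetical values of F and \(\alpha \).

```python
# Minimal numeric check of the constant-phase property of 1/(F s^alpha):
# with s = j*omega, the phase is -alpha * 90 degrees at every frequency.
# The F and alpha values below are arbitrary, hypothetical choices.
import cmath
import math

def fractor_impedance(F, alpha, omega):
    s = 1j * omega
    return 1.0 / (F * s ** alpha)

F, alpha = 1e-6, 0.5                    # hypothetical fractance and exponent
for omega in (1e2, 1e3, 1e4):
    Z = fractor_impedance(F, alpha, omega)
    print(f"omega={omega:>7.0f}  |Z|={abs(Z):10.1f}  phase={math.degrees(cmath.phase(Z)):6.1f} deg")
```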

