Similar Documents
20 similar documents found (search time: 437 ms)
1.
In this paper we present a secure and efficient transaction protocol that provides anonymity and can detect double spending. The proposed payment system is based on the ElGamal encryption scheme, the ElGamal signature scheme and the ElGamal blind signature protocol. We show that our transaction protocol is secure and efficient. We give the definitions of unlinkability and unforgeability for our security model and prove that the proposed transaction protocol is unforgeable and satisfies the unlinkability property. We show that the proposed system is more efficient, in terms of computation and communication cost, than the compared payment systems (Eslami et al. in Electron Commer Res Appl 10:59–66, 2011; Chen et al. in Electron Commer Res Appl 10:279–287, 2011; Liu et al. in Proceedings of second European PKI workshop: research and applications. Lecture notes in computer science, vol 3545, pp 206–214, 2005 and Chen et al. in Electron Commer Res Appl 10:673–682, 2011) for a customer who withdraws and spends an e-coin and for the merchant who verifies an electronic coin. Also, the proposed e-cash system is useful for electronic transactions when the connection between the bank and the merchant is not available during the payment protocol. This reduces the bandwidth required by the payment protocol and thus increases the speed of the electronic transaction.
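The ElGamal signature scheme named above as a building block is standard; the toy sketch below is not the paper's e-cash protocol and uses tiny, insecure, illustration-only parameters, but it shows the sign/verify algebra the protocol relies on.

    # Toy ElGamal signature sketch (textbook scheme, not the paper's e-cash protocol).
    # The parameters below are tiny and insecure; they only illustrate the algebra.
    import random
    from math import gcd

    p = 467                      # small prime (illustration only)
    g = 2                        # generator of Z_p^*
    x = 127                      # signer's private key
    y = pow(g, x, p)             # corresponding public key

    def sign(m):
        while True:
            k = random.randrange(2, p - 1)
            if gcd(k, p - 1) == 1:
                break
        r = pow(g, k, p)
        s = ((m - x * r) * pow(k, -1, p - 1)) % (p - 1)
        return r, s

    def verify(m, r, s):
        return pow(g, m, p) == (pow(y, r, p) * pow(r, s, p)) % p

    r, s = sign(42)
    print(verify(42, r, s))      # True

In the blind-signature variant used for anonymity, the customer blinds the message before it reaches the bank and unblinds the resulting signature, so the bank cannot link the spent coin to the withdrawal.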

2.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. in Phys Plasmas 16(5):056108, 2009; Ku et al. in Nucl Fusion 49:115021, 2009; Adams et al. in J Phys 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin in J Comput Phys 200(1):133–152, 2004; Jardin et al. in J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin in J Comput Phys 228(20):7742–7770, 2009; Jardin in J Comput Phys 231(3):832–838, 2012; Jardin et al. in Comput Sci Discov 5(1):014002, 2012; Ferraro et al. in Sci Discov Adv Comput, 2012; Ferraro et al. in International Sherwood fusion theory conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

3.
Very recently, the experimental demonstration of quantum secure direct communication (QSDC) with state-of-the-art atomic quantum memory has been reported (Zhang et al. in Phys Rev Lett 118:220501, 2017). Quantum dialogue (QD) falls under QSDC, in which secret messages are communicated simultaneously between two legitimate parties. The successful experimental demonstration of QSDC opens up the possibility of practical implementation of QD protocols. Thus, it is necessary to analyze the practical security issues of QD protocols for future implementation. Since the very first proposal for QD by Nguyen (Phys Lett A 328:6–10, 2004), a large number of variants and extensions have been presented to date. However, all of them leak half of the secret bits to the adversary through classical communication of the measurement results. In this direction, motivated by the idea of Lo et al. (Phys Rev Lett 108:130503, 2012), we propose a measurement-device-independent quantum dialogue scheme which is resistant to such information leakage as well as to side-channel attacks. In the proposed protocol, Alice and Bob, the two legitimate parties, are only allowed to prepare states. The states are measured by an untrusted third party who may himself behave as an adversary. We show that our protocol is secure under this adversarial model. The current protocol does not require any quantum memory, and thus it is inherently robust against memory attacks. Such robustness might not be guaranteed in the QSDC protocol with quantum memory (Zhang et al. 2017).

4.
5.
We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. (2012), and answers an open problem from Chakrabarti et al. (2012). In our second result we prove that the information cost of IP\(_n\) is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound recently proved by Braverman and Weinstein (Electronic Colloquium on Computational Complexity (ECCC) 18, 164, 2011). Our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past (Chakrabarti et al. 2001; Bar-Yossef et al. in J Comput Syst Sci 68(4):702–732, 2004; Barak et al. 2010) used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.

6.
The aim of Content-based Image Retrieval (CBIR) is to find the set of images that best match a query based on visual features. Most existing CBIR systems find similar images using low-level features, while Text-based Image Retrieval (TBIR) systems find images with relevant tags regardless of the contents of the images. Generally, people are more interested in images that are similar both in contour and in high-level concept. Therefore, we propose a new strategy called Iterative Search to meet this requirement. It mines knowledge from the images similar to the original query in order to compensate for information missing in the feature extraction process. To evaluate the performance of the Iterative Search approach, we apply this method to four different CBIR systems (HOF Zhou et al. in ACM international conference on multimedia, 2012; Zhou and Zhang in Neural information processing—international conference, ICONIP 2011, Shanghai, 2011; HOG Dalal and Triggs in IEEE computer society conference on computer vision and pattern recognition, 2005; GIST Oliva and Torralba in Int J Comput Vision 42:145–175, 2001; and CNN Krizhevsky et al. in Adv Neural Inf Process Syst 25, 2012) in our experiments. The results show that Iterative Search improves the performance of the original CBIR features by about \(20\%\) on both the Oxford Buildings dataset and the Object Sketches dataset. Meanwhile, it is not restricted to any particular visual features.
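A minimal sketch of one plausible reading of the iterative-search idea described above (retrieve, then re-query with features aggregated from the top results) follows; the function name, the cosine similarity and the mean-of-top-results aggregation are illustrative assumptions, not the paper's exact procedure.

    # Hypothetical query-expansion loop over a database of feature vectors.
    # The aggregation step is an assumption for illustration only.
    import numpy as np

    def iterative_search(query_feat, db_feats, rounds=2, top_k=5):
        feats = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
        q = query_feat / np.linalg.norm(query_feat)
        for _ in range(rounds):
            scores = feats @ q                     # cosine similarity to current query
            top = np.argsort(-scores)[:top_k]      # most similar images so far
            q = q + feats[top].mean(axis=0)        # enrich the query with their features
            q = q / np.linalg.norm(q)
        return np.argsort(-(feats @ q))            # ranking under the enriched query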

7.
Some numerical algorithms for elliptic eigenvalue problems are proposed, analyzed, and numerically tested. The methods combine advantages of the two-grid algorithm (Xu and Zhou in Math Comput 70(233):17–25, 2001), the two-space method (Racheva and Andreev in Comput Methods Appl Math 2:171–185, 2002), the shifted inverse power method (Hu and Cheng in Math Comput 80:1287–1301, 2011; Yang and Bi in SIAM J Numer Anal 49:1602–1624, 2011), and the polynomial preserving recovery enhancing technique (Naga et al. in SIAM J Sci Comput 28:1289–1300, 2006). Our new algorithms compare favorably with some existing methods and enjoy a superconvergence property.
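Of the ingredients listed above, the shifted inverse power method is a classical building block; the dense-matrix sketch below shows only this step in its textbook form, not the paper's combined two-grid/two-space algorithm.

    # Shifted inverse power iteration for the eigenpair of A nearest a given shift.
    # Dense and unpreconditioned; in two-grid schemes the shift typically comes
    # from a coarse-grid eigenvalue approximation.
    import numpy as np

    def shifted_inverse_power(A, shift, x0, iters=20):
        n = A.shape[0]
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            y = np.linalg.solve(A - shift * np.eye(n), x)   # solve (A - shift*I) y = x
            x = y / np.linalg.norm(y)
        return x @ A @ x, x                                  # Rayleigh quotient, eigenvector

    A = np.diag([1.0, 4.0, 9.0])
    lam, v = shifted_inverse_power(A, shift=3.5, x0=np.ones(3))
    print(lam)   # approximately 4.0, the eigenvalue closest to the shift

With the shift taken from a coarse approximation, a few iterations sharpen the eigenpair, which is the role this building block plays inside two-grid schemes.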

8.
The latest-generation earth observation instruments on airborne and satellite platforms are currently producing an almost continuous high-dimensional data stream. This exponentially growing data poses a new challenge for real-time image processing and recognition. Making full and effective use of the spectral information and spatial structure information of high-resolution remote sensing images is the key to processing and recognizing high-resolution remote sensing data. In this paper, the adaptive multipoint moment estimation (AMME) stochastic optimization algorithm is proposed for the first time, using finite lower-order moments and additional estimation points. This algorithm not only reduces the probability of reaching a local optimum during learning, but also improves the convergence rate of the convolutional neural network (LeCun et al. in Advances in neural information processing systems, 1990). Second, for remote sensing images characterized by complex backgrounds and small sensitive targets, we propose a feature extraction method named weighted pooling, which automatically discovers and locates small targets and assigns them high weights, to further improve the performance of real-time image recognition. We combine AMME and weighted pooling with the spatial pyramid representation (Harada et al. in Comput Vis Pattern Recognit 1617–1624, 2011) algorithm to form a new, multiscale, and multilevel real-time image recognition model named weighted spatial pyramid networks (WspNet). Finally, we use the MNIST, ImageNet, and remote sensing natural-disaster data sets to test WspNet. WspNet achieves a new state of the art in terms of convergence rate and image feature extraction compared with conventional stochastic gradient descent methods [like AdaGrad, AdaDelta and Adam (Zeiler in Comput Sci, 2012; Kingma and Ba in Comput Sci, 2014; Duchi et al. in J Mach Learn Res 12(7):2121–2159, 2011)] and pooling methods [like max-pooling, avg-pooling and stochastic-pooling (Zeiler and Fergus in Stochastic pooling for regularization of deep convolutional neural networks, 2013)].

9.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of reaction automata (RAs) studied in recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a, Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs working in the sequential manner coincides with that of the class of Petri nets, which is in marked contrast to the result that RAs (in both maximally parallel and sequential manners) have the computational power of Turing universality (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous result on the Turing computability of RAs but also clarify the computational power of inhibitors in RA computation.

10.
The intuitionistic fuzzy set is capable of handling uncertainty together with its counterpart, falsity, which exists in nature. A proximity measure is a convenient way to demonstrate the impractical significance of differences between membership values in the intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in Zadeh's fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for intuitionistic fuzzy sets and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in previous works are also investigated. We apply the δ-equalities to the application of medical diagnosis to investigate a patient's diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified sets with certain degrees of equality or similarity and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check the efficiency and applicability on real-world problems. The results obtained are also better in comparison with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and the Support Vector Machine (SVM).
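For orientation only, the sketch below computes a degree of equality in the spirit of the classical δ-equality (Cai 2001), naively extended to membership/non-membership pairs; the paper's exact definition for intuitionistic fuzzy sets may differ.

    # Illustrative delta-equality degree for two finite intuitionistic fuzzy sets,
    # each given as {element: (membership mu, non-membership nu)}. Classically,
    # A and B are delta-equal when sup|A - B| <= 1 - delta; here the supremum is
    # taken over both mu and nu differences (an assumed, not authoritative, extension).

    def delta_equality(A, B):
        worst = max(
            max(abs(A[x][0] - B[x][0]), abs(A[x][1] - B[x][1]))
            for x in A
        )
        return 1.0 - worst      # A and B are delta-equal for every delta up to this value

    A = {"x1": (0.8, 0.1), "x2": (0.4, 0.5)}
    B = {"x1": (0.7, 0.2), "x2": (0.5, 0.4)}
    print(delta_equality(A, B))  # 0.9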

11.
Robust and accurate detection of the pupil position is a key building block for head-mounted eye tracking and a prerequisite for applications on top, such as gaze-based human–computer interaction or attention analysis. Despite a large body of work, detecting the pupil in images recorded under real-world conditions is challenging given significant variability in eye appearance (e.g., illumination, reflections, occlusions, etc.), individual differences in eye physiology, as well as other sources of noise, such as contact lenses or make-up. In this paper we review six state-of-the-art pupil detection methods, namely ElSe (Fuhl et al. in Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ACM, New York, NY, USA, pp 123–130, 2016), ExCuSe (Fuhl et al. in Computer analysis of images and patterns. Springer, New York, pp 39–51, 2015), Pupil Labs (Kassner et al. in Adjunct proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (UbiComp), pp 1151–1160, 2014. doi: 10.1145/2638728.2641695), SET (Javadi et al. in Front Neuroeng 8, 2015), Starburst (Li et al. in Computer vision and pattern recognition workshops, 2005. IEEE Computer Society conference on CVPR workshops. IEEE, pp 79–79, 2005), and Świrski (Świrski et al. in Proceedings of the symposium on eye tracking research and applications (ETRA). ACM, pp 173–176, 2012. doi: 10.1145/2168556.2168585). We compare their performance on a large-scale data set consisting of 225,569 annotated eye images taken from four publicly available data sets. Our experimental results show that the algorithm ElSe (Fuhl et al. 2016) outperforms the other pupil detection methods by a large margin, thus offering robust and accurate pupil positions on challenging everyday eye images.

12.
The objective of this paper is to focus on one of the "building blocks" of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain Method using an immersed microstructure method, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time step adaptation allows the numerical scheme to iteratively resolve the changing physical fields by refining the time-steps during phases of the process when the system is undergoing large changes on a relatively small time-scale; it can also enlarge the time-steps when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation, with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally, in order to be easily utilized by researchers in the field. More advanced conduction models, based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
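The adaptive time-stepping logic described above can be summarized by a generic refine/enlarge loop; the sketch below uses a simple decay model problem as a stand-in for the staggered multiphysics solve, so it illustrates only the control flow, not the paper's scheme.

    # Generic temporally-adaptive stepping: halve the time step when the per-step
    # change in the field is large, grow it when the change is small. "advance"
    # is a stand-in model problem (du/dt = -2u), not the coupled optical/thermal/
    # mechanical solve of the paper.
    import numpy as np

    def advance(u, dt):
        return u + dt * (-2.0 * u)

    def adaptive_march(u0, t_end, dt0, tol=1e-2):
        t, dt, u = 0.0, dt0, np.array(u0, dtype=float)
        while t < t_end:
            step = min(dt, t_end - t)
            u_new = advance(u, step)
            change = np.abs(u_new - u).max()
            if change > tol and dt > 1e-8:
                dt *= 0.5                 # large change on a small time-scale: refine
                continue
            t, u = t + step, u_new
            if change < 0.1 * tol:
                dt *= 1.5                 # slow phase: enlarge the step
        return u

    print(adaptive_march([1.0], t_end=1.0, dt0=0.1))   # close to exp(-2), about 0.135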

13.
We study connectivity preserving multivalued functions (Kovalevsky in A new concept for digital geometry, shape in picture, 1994) between digital images. This notion generalizes that of continuous multivalued functions (Escribano et al. in Discrete geometry for computer imagery, lecture notes in computer science, 2008; Escribano et al. in J Math Imaging Vis 42:76–91, 2012) studied mostly in the setting of the digital plane \({\mathbb {Z}}^2\). We show that connectivity preserving multivalued functions, like continuous multivalued functions, are appropriate models for digital morphological operations. Connectivity preservation, unlike continuity, is preserved by compositions, and generalizes easily to higher dimensions and arbitrary adjacency relations.

14.
In this paper, a new numerical approximation is discussed for the two-dimensional distributed-order time fractional reaction–diffusion equation. Combining the idea of the weighted and shifted Grünwald difference (WSGD) approximation (Tian et al. in Math Comput 84:1703–1727, 2015; Wang and Vong in J Comput Phys 277:1–15, 2014) in time, we establish an orthogonal spline collocation (OSC) method in space. A detailed analysis shows that the proposed scheme is unconditionally stable and convergent with convergence order \(\mathscr {O}(\tau ^2+\Delta \alpha ^2+h^{r+1})\), where \(\tau , \Delta \alpha , h\) and r are, respectively, the time step size, the step size in the distributed-order variable, the space step size, and the polynomial degree in space. Interestingly, we prove that the proposed WSGD-OSC scheme converges with second order in time, whereas previously proposed OSC schemes (Fairweather et al. in J Sci Comput 65:1217–1239, 2015; Yang et al. in J Comput Phys 256:824–837, 2014) can at most achieve a temporal accuracy whose order depends on the order of the fractional derivatives in the equations and is usually less than two. Some numerical results are also given to confirm our theoretical prediction.
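The Grünwald-type weights that WSGD formulas combine are generated by a simple recurrence; the sketch below shows only these standard weights \(g_k^{(\alpha )}=(-1)^k\binom{\alpha }{k}\), not the full WSGD-OSC scheme of the paper.

    # Standard Grunwald-Letnikov weights via g_0 = 1, g_k = (1 - (alpha+1)/k) * g_{k-1}.
    # Weighted-and-shifted (WSGD) formulas take fixed linear combinations of shifted
    # sums of these weights to reach second-order accuracy in time.

    def gl_weights(alpha, n):
        g = [1.0]
        for k in range(1, n + 1):
            g.append((1.0 - (alpha + 1.0) / k) * g[-1])
        return g

    print(gl_weights(0.5, 5))   # [1.0, -0.5, -0.125, -0.0625, -0.0390625, -0.02734375]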

15.
In this paper we study an economy with a high degree of financialization in which (non-financial) firms need loans from commercial banks to finance production, service debt, and make long-term investments. Along the business cycle, the economy follows a Minsky base cycle with firms traversing the various stages of financial fragility, i.e. hedge, speculative and Ponzi finance (cf. Minsky in The financial instability hypothesis: a restatement. Hyman P Minsky archive paper, vol 180, pp 541–552, 1978; Stabilizing an unstable economy. Yale University Press, 2nd edn 2008, McGraw-Hill, New York, 1986; The financial instability hypothesis. Economics working paper archive wp74. The Jerome Levy Economics Institute of Bard College, 1992). In the speculative financial stage cash flows are insufficient to finance the repayment of principal but sufficient for paying interest, so banks are willing to roll over credits in order to prevent loan defaults. In the Ponzi financial position even interest payments cannot be served, but banks may still be willing to keep firms alive through “extend and pretend” loans, also known as zombie lending (Caballero et al. in Am Econ Rev 98(5):1943–1977, 2008). This lending behavior may cause credit bubbles with increasing leverage ratios. Empirical evidence suggests that recessions following such leveraging booms are more severe and can be associated with higher economic costs (Jordà et al. in J Money Credit Bank 45(s2):3–28, 2013; Schularick and Taylor in Am Econ Rev 102(2):1029–1061, 2012). We study macroprudential regulations aimed at: (i) the prevention and mitigation of credit bubbles, (ii) ensuring macro-financial stability, and (iii) limiting the ability of banks to create unsustainable debt bubbles. Our results show that limiting credit growth by using a non-risk-weighted capital ratio has slightly positive effects, while using loan eligibility criteria such as cutting off funding to all financially unsound firms (speculative and Ponzi) has strong positive effects.
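The hedge/speculative/Ponzi distinction used above reduces to a comparison of a firm's cash flow with its debt service; a minimal illustrative classifier is sketched below (variable names are hypothetical stand-ins, not the paper's model variables).

    # Minsky-style classification of a firm's financial position, following the
    # cash-flow comparison described in the abstract. Names are illustrative only.

    def financial_position(cash_flow, interest_due, principal_due):
        if cash_flow >= interest_due + principal_due:
            return "hedge"        # can service both interest and principal
        if cash_flow >= interest_due:
            return "speculative"  # covers interest only; principal must be rolled over
        return "ponzi"            # cannot even cover interest payments

    print(financial_position(cash_flow=80.0, interest_due=30.0, principal_due=60.0))  # speculative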

16.
Quantum correlations (QCs) in some separable states have been proposed as a key resource for certain quantum communication tasks and quantum computational models without entanglement. In this paper, a family of nine-parameter separable states, obtained from an arbitrary mixture of two sets of bi-qubit product pure states, is considered. QCs in these separable states are studied analytically or numerically using four QC quantifiers, i.e., measurement-induced disturbance (Luo in Phys Rev A 77:022301, 2008), ameliorated MID (Girolami et al. in J Phys A Math Theor 44:352002, 2011), quantum dissonance (DN) (Modi et al. in Phys Rev Lett 104:080501, 2010), and new quantum dissonance (Rulli in Phys Rev A 84:042109, 2011), respectively. First, an inherent symmetry in the separable states concerned is revealed, that is, any nine-parameter separable state concerned in this paper can be transformed into a three-parameter kernel state via a certain local unitary operation. Then, four different QC expressions are concretely derived with the four QC quantifiers. Furthermore, some comparative studies of the QCs are presented, discussed and analyzed, and some distinct features about them are exposed. We find that, in the framework of all four QC quantifiers, the more mixed the original two pure product states are, the larger the QCs owned by the separable states. Our results reveal some intrinsic features of QCs in separable systems in quantum information.

17.
In quantum information theory, symmetric informationally complete positive operator-valued measures (SIC-POVMs) are relevant to quantum state tomography [8], quantum cryptography [15], and foundational studies [16]. In general, it is hard to construct SIC-POVMs, and only a few classes of them are known to exist. Moreover, we do not know whether an infinite class of them exists. Many researchers have therefore tried to construct approximately symmetric informationally complete positive operator-valued measures (ASIC-POVMs). In this paper, we propose two new constructions of ASIC-POVMs for prime-power dimensions, using only multiplicative characters over finite fields.
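For reference, the standard defining condition of a SIC-POVM in dimension d (textbook definition, not this paper's construction) is: a set of \(d^2\) unit vectors \(|\psi _i\rangle \) satisfying \(|\langle \psi _i|\psi _j\rangle |^2=\frac{1}{d+1}\) for all \(i\ne j\), whose subnormalized projectors \(\frac{1}{d}|\psi _i\rangle \langle \psi _i|\) form a resolution of the identity, \(\sum _{i=1}^{d^2}\frac{1}{d}|\psi _i\rangle \langle \psi _i|=I_d\). An ASIC-POVM relaxes these conditions so that they hold only approximately.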

18.
State-based formal methods [e.g. Event-B/RODIN (Abrial in Modeling in Event-B—system and software engineering. Cambridge University Press, Cambridge, 2010; Abrial et al. in Int J Softw Tools Technol Transf (STTT) 12(6):447–466, 2010)] for critical system development and verification are now well established, with track records including tool support and industrial applications. The focus of proof-based verification, in particular, is on safety properties. Liveness properties, which guarantee eventual or convergent computation of some requirements, are less well dealt with. Inductive reasoning about liveness is not explicitly supported. Liveness proofs are often complex and expensive, requiring high skill levels on the part of the verification engineer. Fairness-based temporal logic approaches have been proposed to address this, e.g. TLA (Lamport in ACM Trans Program Lang Syst 16(3):872–923, 1994) and that of Manna and Pnueli (Temporal verification of reactive systems—safety. Springer, New York, 1995). We contribute to this technology need by proposing a fairness-based method integrating temporal and first-order logic, proof and tools for modelling and verification of safety and liveness properties. The method is based on an integration of Event-B and TLA. Building on our previous work (Méry and Poppleton in Integrated formal methods, 10th international conference, IFM 2013, Turku, Finland, pp 208–222, 2013. doi: 10.1007/978-3-642-38613-8_15), we present the method via three example population protocols (Angluin et al. in Distrib Comput 18(4):235–253, 2006). These were proposed as a theoretical framework for computability reasoning about Wireless Sensor Network and Mobile Ad-Hoc Network algorithms. Our examples present typical liveness and convergence requirements. We prove convergence results for the examples by integrated modelling and proof with Event-B/RODIN and TLA. We exploit existing proof rules, and define and apply three new proof rules; soundness proofs are also provided. During the process we observe certain repeating patterns in the proofs. These are easily identified and reused because of the explicit nature of the reasoning.
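As background (standard TLA notation, not a formula quoted from the paper), the liveness and convergence requirements discussed above are typically stated as leads-to properties proved under fairness assumptions, e.g. \(Init \wedge \Box [Next]_{v} \wedge \mathrm{WF}_{v}(A) \Rightarrow (P \leadsto Q)\), where \(P \leadsto Q\) abbreviates \(\Box (P \Rightarrow \Diamond Q)\): whenever \(P\) holds, \(Q\) eventually holds.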

19.
To generate the long-distance shared quantum correlations (QCs) needed for information processing in future quantum networks, we recently proposed the concept of a QC repeater and its kernel technique, named QC swapping. In addition, we extensively studied QC swapping between two simple QC resources (i.e., a pair of Werner states) with four different methods for quantifying QCs (Xie et al. in Quantum Inf Process 14:653–679, 2015). In this paper, we continue to treat the same issue by employing three other methods associated with relative entropies, i.e., the MPSVW method (Modi et al. in Phys Rev Lett 104:080501, 2010), the Zhang method (arXiv:1011.4333 [quant-ph]) and the RS method (Rulli and Sarandy in Phys Rev A 84:042109, 2011). We first derive analytic expressions for all QCs that occur during the swapping process and then reveal their properties concerning monotonicity and thresholds. Importantly, we find that a long-distance shared QC can indeed be generated from two short-distance ones via QC swapping. In addition, we briefly compare our present results with our previous ones.

20.
Phononic crystals (PnC) with a specifically designed liquid-filled defect have recently been introduced as a novel sensor platform (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012). Sensors based on this principle feature a band gap covering the typical input span of the measurand as well as a narrow transmission peak within the band gap, where the frequency of maximum transmission is governed by the measurand. This approach has been applied to the determination of volumetric properties of liquids (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012; Oseev et al. in Sens Actuators B Chem 189:208–212, 2013; Lucklum and Li in Meas Sci Technol 20(12):124014, 2009) and has demonstrated attractive sensitivity. One way to improve sensitivity requires higher probing frequencies, in the range of 100 MHz and above. In this range surface acoustic wave (SAW) devices are an established basis for sensors. We have performed first tests towards a PnC microsensor (Lucklum et al. in Towards a SAW based phononic crystal sensor platform. In: 2013 Joint European frequency and time forum and international frequency control symposium (EFTF/IFC), pp 69–72, 2013). The respective feature sizes of the PnC SAW sensor are in the range of 10 µm and below. Whereas such dimensions are state of the art for common MEMS materials, etching holes and cavities with the required aspect ratio (diameter/depth) in piezoelectric materials is still challenging. In this contribution we describe an improved technological process able to realize considerably deep and uniform holes in a SAW substrate.
