Similar Documents
20 similar documents found (search time: 62 ms)
1.
Orientation-Matching Minimization for Image Denoising and Inpainting
In this paper, we propose an orientation-matching functional minimization for image denoising and image inpainting. Following the two-step TV-Stokes algorithm (Rahman et al. in Scale space and variational methods in computer vision, pp. 473–482, Springer, Heidelberg, 2007; Tai et al. in Image processing based on partial differential equations, pp. 3–22, Springer, Heidelberg, 2006; Bertalmio et al. in Proc. conf. comp. vision pattern rec., pp. 355–362, 2001), a regularized tangential vector field with a zero divergence condition is first obtained. Then a novel approach to reconstruct the image is proposed. Instead of finding an image that fits the regularized normal direction from the first step, we propose to minimize an orientation-matching cost measuring the alignment between the image gradient and the regularized normal direction. This functional yields a new nonlinear partial differential equation (PDE) for reconstructing denoised and inpainted images. The equation has an adaptive diffusivity depending on the orientation of the regularized normal vector field, producing reconstructed images with sharp edges and smooth regions. The additive operator splitting (AOS) scheme is used for discretizing the Euler-Lagrange equations. We present the results of various numerical experiments that illustrate the improvements obtained with the new functional.
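The two-step structure described above can be summarized by a hedged sketch of the second-step minimization (the notation below is assumed, not quoted from the paper): given the regularized unit normal field n from the TV-Stokes first step, the reconstructed image u is chosen so that its gradient aligns with n.

```latex
% Sketch under assumed notation: \Omega is the image domain, u_0 the
% observed image, \mathbf{n} the regularized unit normal field from
% step one, and \lambda a fidelity weight (dropped inside an
% inpainting hole, where no data term applies).
\min_{u} \int_{\Omega} \bigl( |\nabla u| - \nabla u \cdot \mathbf{n} \bigr)\, dx
  \;+\; \frac{\lambda}{2} \int_{\Omega} \bigl( u - u_0 \bigr)^2 \, dx
```

Since |n| ≤ 1, the first integrand is nonnegative and vanishes exactly where ∇u points along n, which is the orientation-matching idea; its Euler-Lagrange equation then carries the orientation-dependent diffusivity mentioned in the abstract.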

2.
Recently, Tzeng et al. proposed a nonrepudiable threshold multi-proxy multi-signature scheme with shared verification. In their scheme, a subset of original signers can delegate the signing power to a group of designated proxy signers in such a way that: (i) a valid proxy signature can only be generated by a subset of these proxy signers for a group of designated verifiers; (ii) the validity of the generated proxy signature can only be verified by a subset of the designated verifiers. This article, however, demonstrates a security leak inherent in Tzeng et al.’s scheme: any verifier can check the validity of the proxy signature by himself, without the help of the other verifiers. That is, Tzeng et al.’s scheme cannot achieve its claimed security requirement. Finally, we propose an improvement to eliminate this security leak.

3.
One-time signature schemes rely on hash functions and are, therefore, assumed to be resistant to attacks by quantum computers. These approaches inherently raise a key management problem, as the key pair can be used for only one message. That means, for one-time signature schemes to work, the sender must deliver the verification key together with the message and the signature. Upon reception, the receiver has to verify the authenticity of the verification key before verifying the signature itself. Hash-tree based solutions tackle this problem by basing the authenticity of a large number of verification keys on the authenticity of a single root key. This approach, however, causes computation, communication, and storage overhead. Using hardware acceleration, this paper proposes, for the first time, a processor architecture which boosts the performance of a one-time signature scheme without degrading memory usage and communication properties. This architecture realizes the chained Merkle signature scheme on the basis of the Winternitz one-time signature scheme. All operations, i.e., key generation, signing, and verification, are implemented on an FPGA platform, which acts as a coprocessor. Timing measurements on the prototype show a performance boost of at least one order of magnitude compared to an identical software solution.
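As a concrete illustration of the Winternitz idea mentioned above (hash chains whose walked length encodes a message chunk), here is a minimal Python sketch. It omits the checksum chains that the real scheme requires to prevent forgery, so it is illustrative only; all names and parameters are our own, not the paper's.

```python
import hashlib
import os

W = 16  # Winternitz parameter: each chain encodes one 4-bit message chunk


def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()


def hash_chain(x: bytes, steps: int) -> bytes:
    # apply H repeatedly, `steps` times
    for _ in range(steps):
        x = H(x)
    return x


def to_chunks(digest: bytes):
    # split a digest into 4-bit values in 0..15, one per chain
    out = []
    for b in digest:
        out.extend((b >> 4, b & 0x0F))
    return out


def keygen():
    n = len(to_chunks(H(b"")))               # 64 chains for SHA-256
    sk = [os.urandom(32) for _ in range(n)]  # chain bottoms: the secret key
    pk = [hash_chain(s, W - 1) for s in sk]  # chain tops: the verification key
    return sk, pk


def sign(sk, msg: bytes):
    # reveal each chain advanced by the corresponding message chunk
    return [hash_chain(s, c) for s, c in zip(sk, to_chunks(H(msg)))]


def verify(pk, msg: bytes, sig) -> bool:
    # completing each chain to the top must reproduce the public key
    return all(hash_chain(s, W - 1 - c) == p
               for s, p, c in zip(sig, pk, to_chunks(H(msg))))
```

Key generation dominates (63 hash evaluations per chain here), which is exactly the kind of work the paper's coprocessor accelerates.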

4.
Liu Yi, Chen Tianxiao. 《计算机应用研究》 (Application Research of Computers), 2020, 37(10): 3107-3111
To address the heavy computational overhead of BL-MLE, the cloud-storage data deduplication scheme proposed by Chen et al., this paper improves their scheme and proposes a more efficient deduplication scheme. It first analyzes BL-MLE and points out its shortcomings in computational efficiency; it then improves BL-MLE's block-tag generation and block-tag comparison procedures by using a hash function and a tag decision tree; finally, the improved scheme is evaluated by simulation. The results show that the improved scheme needs fewer block-tag comparisons and incurs a lower time overhead for block-tag generation, and is thus better suited to current cloud-storage environments.
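The improvement above rests on two generic ingredients: hash-based block tags and a tree-shaped tag index, so that a duplicate check walks a single path instead of comparing against every stored tag. The paper's exact construction is not reproduced here; the following Python sketch (all names hypothetical) only illustrates that idea with a prefix tree over hex tags.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size


def block_tag(block: bytes) -> str:
    # hash-based block tag (a stand-in for BL-MLE's block-tag computation)
    return hashlib.sha256(block).hexdigest()


class TagTree:
    """Prefix tree over hex tags: a duplicate check walks one path of
    fixed length instead of scanning every stored tag."""

    def __init__(self):
        self.root = {}

    def insert(self, tag: str) -> bool:
        """Insert a tag; return True if it was already present (duplicate)."""
        node = self.root
        for ch in tag:
            node = node.setdefault(ch, {})
        dup = '$' in node
        node['$'] = True
        return dup


def dedup(blocks):
    # keep only the first copy of each distinct block
    tree, unique = TagTree(), []
    for b in blocks:
        if not tree.insert(block_tag(b)):
            unique.append(b)
    return unique
```

A sorted list with binary search would serve the same purpose; the tree form matches the "tag decision tree" the abstract names.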

5.
Pit (Pedersen, 2008; Pedersen and Reza in ISOLA ’06: proceedings of the second international symposium on leveraging applications of formal methods, verification and validation (ISOLA 2006), pp. 111–118, 2006) is a new language for low-level programming, designed to be a self-hosting alternative to C. The novelty is that it supports automated memory management without excluding manual memory management, and without hindering key features associated with low-level programming, such as raw pointers, inline assembly code, and precise control over execution.

6.
In previous works (Nakao et al., Reliab. Comput., 9(5):359–372, 2003; Watanabe et al., J. Math. Fluid Mech., 6(1):1–20, 2004), the authors considered the numerical verification method of solutions for two-dimensional heat convection problems known as the Rayleigh-Bénard problem. In the present paper, to make the arguments self-contained, we first summarize these results, including the basic formulation of the problem with numerical examples. Next, we give a method to verify the bifurcation point itself, which is important information for clarifying the global bifurcation structure, and show a numerical example. Finally, an extension to the three-dimensional case is described.

7.
Based on the verifiably encrypted signature recently proposed by Gorantla et al., which is provably secure in the standard model, an optimized fair digital-signature exchange scheme is proposed. The two parties first exchange their verifiably encrypted signatures and, once these verify correctly, exchange their actual signatures; if one party fails to execute the protocol honestly, the other party can appeal to a trusted third party to achieve a fair exchange. The proposed scheme features short signatures and a small computational cost, and realizes the exchange of digital signatures fairly and efficiently.

8.
Tzeng et al. proposed a new threshold multi-proxy multi-signature scheme with threshold verification. In their scheme, a subset of original signers authenticates a designated proxy group to sign on behalf of the original group. A message m has to be signed by a subset of proxy signers who can represent the proxy group. Then, the proxy signature is sent to the verifier group. A subset of verifiers in the verifier group can likewise represent the group to authenticate the proxy signature. Subsequently, two improved schemes were proposed to eliminate the security leak of Tzeng et al.’s scheme. In this paper, we point out the security leakage of all three schemes and further propose a novel threshold multi-proxy multi-signature scheme with threshold verification.

9.
A security analysis of the forward-secure certificateless proxy blind signature scheme proposed by Wei Junyi et al. shows that the scheme can neither resist forgery attacks by the original signer nor provide blindness. To address these problems, an improved scheme is proposed. By revising the proxy-key generation and the blind-signing procedures, the improved scheme overcomes the security flaws of the original one. Embedding a one-way hash chain into the signature further guarantees that the improved scheme is backward secure. Moreover, no trusted secure channel is needed between the key generation center and the users, which saves extra overhead. Security analysis shows that the improved scheme satisfies the security requirements of a forward-secure certificateless proxy blind signature scheme.

10.
N. Kharrat, Z. Mghazli. Calcolo, 2012, 49(1): 41-61
We present an a posteriori residual analysis for the Chorin-Temam projection scheme (Chorin in Math. Comput. 23:341–353, 1969; Temam in Arch. Ration. Mech. Appl. 33:377–385, 1969) approximating the time-dependent Stokes model. Based on the multi-step approach introduced in Bergam et al. (Math. Comput. 74(251):1117–1138, 2004), we derive error estimators, with respect to both time and space approximations, related to the diffusive and incompressible parts of the Stokes equations. Using a conforming finite element discretization, we prove the equivalence between the error and the estimators under specific conditions.

11.
An earlier time for inserting and/or accelerating tasks
In a periodic real-time system scheduled by the EDF (Earliest Deadline First) algorithm (Liu and Layland, J. ACM 20(1), 40–61, 1973; Baruah, Proc. of the 27th IEEE International Real-Time Systems Symposium, 379–387, 2006; Buttazzo, J. Real-Time Syst. 29(1), 5–26, 2005), when new tasks have to be inserted into the system at run-time and/or current tasks request to increase their rates in response to internal or external events, the new sum of the utilizations after the insertion and/or acceleration must be limited; otherwise, one or more current tasks usually have to be compressed (their periods prolonged) in order to avoid overload. Buttazzo gave a time from which this kind of adjustment can be done without causing any deadline miss in the system (Buttazzo et al., IEEE Trans. Comput. 51(3), 289–302, 2002). It is, however, not early enough. In this paper, an earlier time is given and formally proved.
Qian Guangming
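The admission condition this entry refers to is the classical Liu and Layland result: under EDF, a periodic task set is schedulable iff its total utilization Σ C_i/T_i does not exceed 1. A minimal Python sketch follows; the period-compression policy shown is a simple proportional one of our own, not the paper's algorithm.

```python
from fractions import Fraction


def utilization(tasks):
    # tasks: list of (C, T) pairs: worst-case execution time and period
    return sum(Fraction(c) / Fraction(t) for c, t in tasks)


def can_admit(tasks, new_task):
    # Liu & Layland (1973): under EDF a periodic task set is
    # schedulable iff total utilization does not exceed 1
    return utilization(tasks + [new_task]) <= 1


def compress(tasks, new_task):
    # If admitting the new task overloads the system, stretch every
    # period by the overload factor so utilization returns to exactly 1
    # (a simple proportional policy for illustration only).
    all_tasks = tasks + [new_task]
    u = utilization(all_tasks)
    if u <= 1:
        return all_tasks
    return [(c, Fraction(t) * u) for c, t in all_tasks]
```

The open question the paper addresses is *when* such a compression may safely take effect, not whether it is needed; the check above only decides the latter.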

12.
Unique Fixpoint Induction (UFI) is the chief inference rule for proving the equivalence of recursive processes in the Calculus of Communicating Systems (CCS) (Milner 1989). It plays a major role in the equational approach to verification. Equational verification is of special interest as it offers theoretical advantages in the analysis of systems that communicate values, have infinite state spaces, or show parameterised behaviour. We call these kinds of systems VIPSs, an acronym for Value-passing, Infinite-State and Parameterised Systems. Automating the application of UFI in the context of VIPSs has been neglected. This is both because many VIPSs are given in terms of recursive function symbols, making it necessary to carefully apply induction rules other than UFI, and because proving that one VIPS process constitutes a fixpoint of another involves computing a process substitution, mapping states of one process to states of the other, that often is not obvious. Hence, VIPS verification is usually turned into equation solving (Lin 1995a). Existing tools for this proof task, such as VPAM (Lin 1993), are highly interactive. We introduce a method that automates the use of UFI. The method uses middle-out reasoning (Bundy et al. 1990a) and is therefore able to apply the rule even without elaborating the details of the application. The method introduces meta-variables to represent those parts of the processes’ state space that are not yet known at application time, hence changing from equation verification to equation solving. Adding this method to the equation plan developed by Monroy et al. (Autom Softw Eng 7(3):263–304, 2000a), we have implemented an automatic verification planner. This planner increases the number of verification problems that can be dealt with fully automatically, thus improving upon the current degree of automation in the field. Partially supported by grants CONACyT-47557-Y and ITESM CCEM-0302-05. Partially supported by EPSRC GR/L/11724.

13.
Programming robot behavior remains a challenging task. While it is often easy to abstractly define or even demonstrate a desired behavior, designing a controller that embodies the same behavior is difficult, time consuming, and ultimately expensive. The machine learning paradigm offers the promise of enabling “programming by demonstration” for developing high-performance robotic systems. Unfortunately, many “behavioral cloning” (Bain and Sammut in Machine intelligence agents. London: Oxford University Press, 1995; Pomerleau in Advances in neural information processing systems 1, 1989; LeCun et al. in Advances in neural information processing systems 18, 2006) approaches that utilize classical tools of supervised learning (e.g. decision trees, neural networks, or support vector machines) do not fit the needs of modern robotic systems. These systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in favor of a supervised learning approach often leads to myopic and poor-quality robot performance. While planning algorithms have shown success in many real-world applications ranging from legged locomotion (Chestnutt et al. in Proceedings of the IEEE-RAS international conference on humanoid robots, 2003) to outdoor unstructured navigation (Kelly et al. in Proceedings of the international symposium on experimental robotics (ISER), 2004; Stentz et al. in AUVSI’s unmanned systems, 2007), such algorithms rely on fully specified cost functions that map sensor readings and environment models to quantifiable costs. Such cost functions are usually manually designed and programmed. Recently, a set of techniques has been developed that explores learning these functions from expert human demonstration. These algorithms apply an inverse optimal control approach to find a cost function for which planned behavior mimics an expert’s demonstration.
The work we present extends the Maximum Margin Planning (MMP) (Ratliff et al. in Twenty second international conference on machine learning (ICML06), 2006a) framework to admit learning of more powerful, non-linear cost functions. These algorithms, known collectively as LEARCH (LEArning to seaRCH), are simpler to implement than most existing methods, more efficient than previous attempts at non-linearization (Ratliff et al. in NIPS, 2006b), satisfy common constraints on the cost function more naturally, and better represent our prior beliefs about the function’s form. We derive and discuss the framework both mathematically and intuitively, and demonstrate practical real-world performance with three applied case studies including legged locomotion, grasp planning, and autonomous outdoor unstructured navigation. The latter study includes hundreds of kilometers of autonomous traversal through complex natural environments. These case studies address key challenges in applying the algorithm in practical settings that utilize state-of-the-art planners, and which may be constrained by efficiency requirements and imperfect expert demonstration.
J. Andrew Bagnell

14.
The fuzzy set theory initiated by Zadeh (Information Control 8:338–353, 1965) was based on the real unit interval [0,1] as the support of membership functions, with the natural product as the intersection operation. This paper proposes to extend this definition by using the more general linearly ordered semigroup structure. As Moisil (Essais sur les Logiques non Chrysippiennes. Académie des Sciences de Roumanie, Bucarest, 1972, p. 162) proposed to define Lukasiewicz logics on an abelian ordered group as the set of truth values, we give a simple negative answer to the question of whether a many-valued logic can be built on a finite abelian ordered group. In a constructive way, characteristic properties are deduced step by step from the corresponding set theory to the semigroup order structure. Some results of Clifford on topological semigroups (Clifford, A.H., Proc. Amer. Math. Soc. 9:682–687, 1958; Clifford, A.H., Trans. Amer. Math. Soc. 88:80–98, 1958), Paalman de Miranda's work on I-semigroups (Paalman de Miranda, A.B., Topological Semigroups. Mathematical Centre Tracts, Amsterdam, 1964) and Schweizer and Sklar's work on T-norms (Schweizer, B., Sklar, A., Publ. Math. Debrecen 10:69–81, 1963; Schweizer, B., Sklar, A., Pacific J. Math. 10:313–334, 1960; Schweizer, B., Sklar, A., Publ. Math. Debrecen 8:169–186, 1961) are revisited in this framework. As a simple consequence of Faucett's theorems (Proc. Amer. Math. Soc. 6:741–747, 1955), we prove how canonical properties from the fuzzy set theory point of view lead to the Zadeh choice, thus giving another proof of the representation theorem of T-norms. This structural approach gives a new perspective from which to tackle the question of G. Moisil about the definition of discrete many-valued logics as approximations of fuzzy continuous ones.
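For readers less familiar with the T-norm material cited above, a small Python sketch checks the defining semigroup properties (commutativity, associativity, unit element 1) for three classical T-norms on sample points of [0,1]. The sampling-based check is our own illustration, not a proof.

```python
# Three classical T-norms on [0,1]; each is a commutative, associative,
# monotone intersection operation with 1 as the unit element.
t_product = lambda x, y: x * y                      # Zadeh's natural product
t_lukasiewicz = lambda x, y: max(0.0, x + y - 1.0)  # Lukasiewicz T-norm
t_min = min                                         # minimum T-norm


def looks_like_tnorm(t, samples, eps=1e-12):
    # spot-check the semigroup axioms on a finite sample of [0,1]
    comm = all(abs(t(x, y) - t(y, x)) < eps
               for x in samples for y in samples)
    assoc = all(abs(t(x, t(y, z)) - t(t(x, y), z)) < eps
                for x in samples for y in samples for z in samples)
    unit = all(abs(t(x, 1.0) - x) < eps for x in samples)
    return comm and assoc and unit
```

The representation theorem the abstract mentions says, roughly, that well-behaved continuous T-norms are built from such basic ones; the check above only verifies the axioms pointwise.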

15.
Recently, Wang et al. proposed a (t,n) threshold signature scheme with (k,l) threshold shared verification and a group-oriented authenticated encryption scheme with (k,l) threshold shared verification. This article shows that both schemes violate the requirement of (k,l) threshold shared verification. Further, two improvements are proposed to eliminate the security leaks inherent in the original schemes.

16.
17.
The notion of an off-line/on-line digital signature scheme was introduced by Even, Goldreich and Micali. Informally, such signature schemes are used to reduce the time required to compute a signature using some kind of preprocessing. Even, Goldreich and Micali show how to realize off-line/on-line digital signature schemes by combining regular digital signatures with efficient one-time signatures. Later, Shamir and Tauman presented an alternative construction (which produces shorter signatures) obtained by combining regular signatures with chameleon hash functions. In this paper, we study off-line/on-line digital signature schemes from both a theoretical and a practical perspective. More precisely, our contribution is threefold. First, we unify the Shamir–Tauman and Even et al. approaches by showing that they can be seen as different instantiations of the same paradigm. We do this by showing that the one-time signatures needed in the Even et al. approach only need to satisfy a weak notion of security. We then show that chameleon hashing is basically a one-time signature which satisfies such a weaker security notion. As a by-product of this result, we study the relationship between one-time signatures and chameleon hashing, and we prove that a special type of chameleon hashing (which we call double-trapdoor) is actually a fully secure one-time signature. Next, we consider the task of building, in a generic fashion, threshold variants of known schemes: Crutchfield et al. proposed a generic way to construct a threshold off-line/on-line signature scheme given a regular threshold one. They applied known threshold techniques to the Shamir–Tauman construction using a specific chameleon hash function. Their solution introduces additional computational assumptions which turn out to be implied by the so-called one-more discrete logarithm assumption.
Here, we propose two generic constructions that can be based on any threshold signature scheme, combined with a specific (double-trapdoor) chameleon hash function. Our constructions are efficient and can be proven secure in the standard model using only the traditional discrete logarithm assumption. Finally, we ran experimental tests to measure the difference in real efficiency between the two known constructions for non-threshold off-line/on-line signatures. Interestingly, we show that, with some optimizations, the two approaches are comparable in efficiency and signature length.
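To make the chameleon-hash ingredient concrete, here is a toy discrete-log chameleon hash in Python with deliberately tiny, insecure parameters (all values our own): the trapdoor holder can open the hash to any message, which is the collision capability the off-line/on-line constructions exploit. It has a single trapdoor; the double-trapdoor variant studied in the paper is not reproduced.

```python
# Toy chameleon hash CH(m, r) = g^m * h^r mod p with h = g^x.
# Parameters are illustrative only: p = 2q + 1 with p, q prime, and
# g generates the order-q subgroup, so exponents live modulo q.
p, q, g = 2039, 1019, 4
x = 123              # the trapdoor, known only to the key owner
h = pow(g, x, p)     # public key


def ch(m: int, r: int) -> int:
    # hash of message m with randomizer r
    return pow(g, m, p) * pow(h, r, p) % p


def collide(m: int, r: int, m2: int) -> int:
    """With the trapdoor x, find r2 such that ch(m2, r2) == ch(m, r).
    In the exponent: m + x*r = m2 + x*r2 (mod q), so
    r2 = r + (m - m2) * x^{-1} (mod q).
    Uses pow(x, -1, q) (modular inverse; Python 3.8+)."""
    return (r + (m - m2) * pow(x, -1, q)) % q
```

In the off-line phase one signs ch(m0, r0) for a dummy m0; on-line, signing a real message m2 reduces to computing collide(m0, r0, m2), one modular multiplication.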

18.
Information hiding methods with low bit rates are important in secure communications. To reduce the bit rate, we propose in this paper a new embedding method based on the SOC (search-order coding) compression technique. Compared to Chang et al.’s scheme from 2004, our scheme completely avoids the transform from SOC coding to OIV (original index values) coding, which significantly reduces the bit rate. To further reduce the bit rate, Chang et al. proposed in 2013 a reversible data hiding scheme using hybrid encoding strategies that introduces side-match vector quantization (SMVQ). But it needs an additional 1-bit indicator to distinguish whether an OIV belongs to G1 or G2. This overhead places a large burden on the compression rate and cannot reduce the bit rate significantly. In contrast, our scheme completely avoids this indicator. The experimental results show that the proposed method efficiently reduces the bit rate while keeping the same embedding capacity as Chang et al.’s 2004 and 2013 schemes. Moreover, our scheme also achieves better performance in both embedding capacity and bit rate than other related VQ-based information hiding schemes.

19.
Recently, Chen [K. Chen, Signature with message recovery, Electronics Letters, 34(20) (1998) 1934] proposed a signature with message recovery. But Mitchell and Yeun [C. J. Mitchell and C. Y. Yeun, Comment - signature with message recovery, Electronics Letters, 35(3) (1999) 217] observed that Chen's scheme is only an authenticated encryption scheme and not a signature scheme as claimed. In this article, we propose a new signature scheme in the sense of Mitchell and Yeun, with a message recovery feature. The designated verifier signature was introduced by Jakobsson et al. [M. Jakobsson, K. Sako, R. Impagliazzo, Designated verifier proofs and their applications, Proc. of Eurocrypt’96, LNCS 1070 (1996) pp. 143–154]. We propose a designated verifier signature scheme with non-repudiation of origin, and also give a protocol for a convertible designated verifier signature scheme with non-repudiation of origin. Both schemes are based on our proposed signature scheme with message recovery.

20.
An emerging trend in DNA computing is the algorithmic analysis of new molecular biology technologies and, more generally, of more effective tools for tackling computational biology problems. An algorithmic understanding of the interaction between DNA molecules has become the focus of research that was initially aimed at solving mathematical problems by processing data within biomolecules. In this paper a novel mechanism of DNA recombination is discussed, which turned out to be a good basis for developing new procedures for DNA manipulation (Franco et al., DNA extraction by cross pairing PCR, 2005; Franco et al., DNA recombination by XPCR, 2006; Manca and Franco, Math Biosci 211:282–298, 2008). It is called XPCR, as it is a variant of the polymerase chain reaction (PCR), which revolutionized molecular biology as a technique for cyclic amplification of DNA segments. A few DNA algorithms are proposed that have been experimentally validated in different contexts, such as mutagenesis (Franco, Biomolecular computing—combinatorial algorithms and laboratory experiments, 2006), multiple concatenation, gene-driven DNA extraction (Franco et al., DNA extraction by cross pairing PCR, 2005), and generation of DNA libraries (Franco et al., DNA recombination by XPCR, 2006); some related ongoing work is also outlined.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号