Similar Documents
20 similar documents found (search time: 187 ms)
1.
At ToSC 2019, Ankele et al. proposed a novel idea for constructing zero-correlation linear distinguishers in a related-tweakey model. This paper further clarifies this principle and gives a search model for zero-correlation distinguishers. As a result, the authors construct, for the first time, 15-round and 17-round zero-correlation linear distinguishers for SKINNY-n-2n and SKINNY-n-3n, respectively, both of which are two rounds longer than Ankele et al.'s. Based on these distinguishers, the paper presents related-tweakey zero-correlation linear attacks on 22-round SKINNY-n-2n and 26-round SKINNY-n-3n, respectively.

2.
In this paper, we consider the k-prize-collecting minimum vertex cover problem with submodular penalties, which generalizes the well-known minimum vertex cover problem, the minimum partial vertex cover problem, and the minimum vertex cover problem with submodular penalties. We are given a cost graph G = (V, E; c) and an integer k. The problem asks for a vertex set S ⊆ V that covers at least k edges. The objective is to minimize the total cost of the vertices in S plus the penalty of the uncovered edge set, where the penalty is determined by a submodular function. We design a two-phase combinatorial algorithm based on the guessing technique and the primal-dual framework to address the problem. When the submodular penalty cost function is normalized and nondecreasing, the proposed algorithm has an approximation factor of 3. When the submodular penalty cost function is linear, the approximation factor is reduced to 2, which is the best possible if the Unique Games Conjecture holds.
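For concreteness, the problem admits the following standard formulation (notation ours: δ(S) denotes the set of edges with at least one endpoint in S, and π is the normalized, nondecreasing submodular penalty on edge subsets):

```latex
\min_{S \subseteq V} \;\; \sum_{v \in S} c(v) \;+\; \pi\bigl(E \setminus \delta(S)\bigr)
\qquad \text{subject to} \qquad |\delta(S)| \ge k .
```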

3.
The problem of subgraph matching is a fundamental issue in graph search and is NP-complete. Recently, subgraph matching has become a popular research topic in the field of knowledge graph analysis, with a wide range of applications including question answering and semantic search. In this paper, we study the problem of subgraph matching on knowledge graphs. Specifically, given a query graph q and a data graph G, the problem of subgraph matching is to find all subgraph isomorphic mappings of q on G. A knowledge graph is a directed labeled multi-graph that can have multiple edges between a pair of vertices, and it has denser semantic and structural features than a general graph. To accelerate subgraph matching on knowledge graphs, we propose a novel subgraph matching algorithm based on a subgraph index, called FGqT-Match. The algorithm rests on two key designs. The first is a subgraph index based on a matching-driven flow graph (FGqT), which eliminates redundant calculations in advance. The second is a multi-label weight matrix, which evaluates a near-optimal matching tree to minimize the number of intermediate candidates. With the aid of these two designs, all subgraph isomorphic mappings are obtained simply by traversing the FGqT. Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform the state-of-the-art algorithms.
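For readers new to the problem, the following minimal backtracking baseline enumerates the same mappings that FGqT-Match computes, albeit in exponential time and without the paper's index structures; all identifiers and the toy data are ours.

```python
def subgraph_matches(q_nodes, q_edges, g_nodes, g_edges):
    """Enumerate all subgraph-isomorphic mappings of query q into data graph G.
    q_nodes/g_nodes map vertex -> label; q_edges/g_edges are sets of
    (source, target, edge_label) triples (multiple labeled edges allowed)."""
    order = list(q_nodes)  # fixed order in which query vertices get assigned

    def extend(mapping):
        if len(mapping) == len(order):
            yield dict(mapping)
            return
        u = order[len(mapping)]
        for v in g_nodes:
            if v in mapping.values() or g_nodes[v] != q_nodes[u]:
                continue  # mapping must be injective and label-preserving
            trial = {**mapping, u: v}
            # every query edge whose endpoints are both mapped must exist in G
            if all((trial[a], trial[b], lbl) in g_edges
                   for (a, b, lbl) in q_edges if a in trial and b in trial):
                yield from extend(trial)

    yield from extend({})

# Toy usage: one 'knows' edge from an A-labeled to a B-labeled vertex.
q_nodes, q_edges = {0: 'A', 1: 'B'}, {(0, 1, 'knows')}
g_nodes = {10: 'A', 11: 'B', 12: 'B'}
g_edges = {(10, 11, 'knows'), (10, 12, 'knows')}
print(list(subgraph_matches(q_nodes, q_edges, g_nodes, g_edges)))
# -> [{0: 10, 1: 11}, {0: 10, 1: 12}]
```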

4.
A secure k-Nearest Neighbor (k-NN) query aims to find the k nearest data points to a given query in an encrypted database hosted by a cloud server, without revealing private information to the untrusted cloud; it has wide applications in areas such as privacy-preserving machine learning and secure biometric identification. Several solutions have been put forward to solve this challenging problem, but the existing schemes still suffer from various limitations in efficiency and flexibility. In this paper, we propose a new encrypt-then-index strategy for the secure k-NN query, which simultaneously achieves sub-linear search complexity (efficiency) and supports dynamic updates over the encrypted database (flexibility). Specifically, we propose a novel algorithm to transform the encrypted database and encrypted query points in the cloud. By indexing the transformed database using spatial data structures such as the R-tree, our strategy enables sub-linear complexity for secure k-NN queries and allows users to dynamically update the encrypted database. To the best of our knowledge, the proposed strategy is the first to provide both properties simultaneously. Through theoretical analysis and extensive experiments, we formally prove the security and demonstrate the efficiency of our scheme.
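The efficiency claim hinges on indexing the transformed points with a spatial structure so that each query is sub-linear on average. The plaintext sketch below shows only that index-then-query step, with scipy's k-d tree standing in for the R-tree named in the abstract; the encryption and transformation algorithm are the paper's contribution and are omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 8))    # stand-in for the transformed database
tree = cKDTree(points)               # built once; all queries reuse it

query = rng.random(8)
dists, idx = tree.query(query, k=5)  # sub-linear average-case k-NN lookup
print(idx, dists)
```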

5.
Reactive synthesis is a technique for the automatic generation of a reactive system from a high-level specification; the system is reactive in the sense that it reacts to inputs from the environment. The specification is in general given as a linear temporal logic (LTL) formula. The behaviour of the system interacting with the environment can be represented as a game in which the system plays against the environment, so the reactive synthesis problem is commonly treated as solving such a game with the specification as the winning condition. Reactive synthesis has been thoroughly investigated for more than two decades. A well-known challenge is dealing with the complex uncertainty of the environment, and a major issue is the lack of sufficient treatment of probabilistic properties in the traditional models. For example, a two-player game defined by a standard Kripke structure does not consider probabilistic transitions in reaction to the uncertain physical environment, while a Markov Decision Process (MDP) in general does not explicitly separate the system from its environment and does not describe the interaction between them. In this paper, we propose a new and more general model that combines the two-player game and the MDP. Furthermore, we study probabilistic reactive synthesis for games of General Reactivity of Rank 1 (i.e., GR(1)) defined in this model. More specifically, we present an algorithm which, for a given model M, a location s, and a GR(1) specification P, determines for each player the strategy that maximizes or minimizes the probability of satisfying P at location s. We use an example to describe the model of probabilistic games and demonstrate our algorithm.
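To make the maximize/minimize objective concrete, the sketch below runs standard value iteration for maximum reachability probability in a plain MDP (a single controller, no adversary); the authors' algorithm for two-player probabilistic GR(1) games generalizes this fixpoint computation, and the toy data here are ours.

```python
def max_reach_prob(states, actions, P, targets, iters=10_000, eps=1e-12):
    """Value iteration for maximum reachability probability in an MDP:
    V(s) = 1 on targets, otherwise V(s) = max_a sum_{s'} P(s'|s,a) * V(s')."""
    V = {s: 1.0 if s in targets else 0.0 for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            best = max((sum(p * V[t] for t, p in P[s][a].items())
                        for a in actions(s)), default=0.0)
            delta, V[s] = max(delta, abs(best - V[s])), best
        if delta < eps:
            break
    return V

# Toy model: from s0, action 'a' reaches the goal w.p. 0.6, else stays in s0.
P = {"s0": {"a": {"goal": 0.6, "s0": 0.4}, "b": {"s0": 1.0}}}
V = max_reach_prob({"s0", "goal"}, lambda s: P.get(s, {}), P, {"goal"})
print(round(V["s0"], 6))  # ~1.0: repeating 'a' reaches the goal almost surely
```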

6.
With the increasing amount of data, there is an urgent need for efficient sorting algorithms to process large data sets. Hardware sorting algorithms have attracted much attention because they can exploit the parallelism of different hardware. However, traditional hardware sort accelerators suffer from the "memory wall" problem due to multiple rounds of data transmission between the memory and the processor. In this paper, we utilize the in-situ processing ability of the ReRAM crossbar to design a new ReCAM array that can perform matrix-vector multiplication and vector-scalar comparison in the same array simultaneously. Using this ReCAM array, we present ReCSA, the first dedicated ReCAM-based sort accelerator. Besides the hardware design, we also develop algorithms that maximize memory utilization and minimize memory exchanges to improve sorting performance. The sorting algorithm in ReCSA can process various data types, such as integers, floats, doubles, and strings. We also present experiments evaluating performance and energy efficiency against state-of-the-art sort accelerators. The experimental results show that ReCSA achieves 90.92×, 46.13×, 27.38×, 84.57×, and 3.36× speedups over CPU-, GPU-, FPGA-, NDP-, and PIM-based platforms, respectively, when processing numeric data sets, and 24.82×, 32.94×, and 18.22× performance improvements over CPU-, GPU-, and FPGA-based platforms when processing string data sets.

7.
Local holographic transformations were introduced by Cai et al., who used them to derive local affine functions, an extra tractable class, for #CSP^2. In the present paper, we not only generalize local affine functions to #CSP^d for general d, but also obtain new tractable classes by combining local holographic transformations with global holographic transformations. Moreover, we show how to use local holographic transformations to prove hardness; this is of independent interest in the complexity classification of counting problems.

8.
On-line transaction processing (OLTP) systems rely on transaction logging and quorum-based consensus protocols to guarantee durability, high availability, and strong consistency, which makes the log manager a key component of distributed database management systems (DDBMSs). The leader of a DDBMS commonly adopts a centralized logging method to write log entries into a stable storage device and uses a constant log replication strategy to periodically synchronize its state to followers. With the advent of new hardware and highly parallel transaction processing, the traditional centralized design of logging limits scalability, and a constant replication trigger condition cannot always maintain optimal performance under dynamic workloads. In this paper, we propose a new log manager named Salmo with scalable logging and adaptive replication for distributed database systems. The scalable logging eliminates centralized contention by utilizing a highly concurrent data structure and speedy log-hole tracking. The kernel of the adaptive replication is an adaptive log shipping method that dynamically adjusts the number of log entries transmitted between leader and followers based on the real-time workload. We implemented and evaluated Salmo in the open-source transaction processing systems Cedar and DBx1000. Experimental results show that Salmo scales well as the number of worker threads increases, improves peak throughput by 1.56× and reduces latency by more than 4× compared with the log replication of Raft, and consistently maintains efficient and stable performance under dynamic workloads.
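The abstract does not spell out Salmo's adjustment rule, but the intuition behind adaptive log shipping (ship bigger batches under heavy load to amortize per-message overhead, smaller ones under light load to cut commit latency) can be sketched with a purely hypothetical heuristic:

```python
def next_batch_size(current, pending_entries, batch_min=16, batch_max=4096):
    """Hypothetical adaptive log-shipping heuristic (not Salmo's actual rule):
    grow the batch when the backlog outruns it, shrink it when the backlog fades."""
    if pending_entries > 2 * current:
        current *= 2                     # heavy load: amortize per-message overhead
    elif pending_entries < current // 2:
        current = max(1, current // 2)   # light load: favor low latency
    return max(batch_min, min(batch_max, current))
```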

9.
Block synchronization is an essential component of blockchain systems. Traditionally, blockchain systems send all the transactions from one node to another for synchronization, but such a method may incur extremely high network bandwidth overhead and significant transmission latency. It is therefore crucial to speed up block synchronization and save bandwidth. A feasible solution is to reduce the amount of data transmitted between any pair of peers during synchronization; however, existing methods based on the Bloom filter or its variants still suffer from multiple round trips of communication and significant synchronization delay. In this paper, we propose a novel protocol named Gauze for fast block synchronization. It utilizes the Cuckoo filter (CF) to discern which transactions of the block to verify are already in the receiver's mempool, providing an efficient solution to the set reconciliation problem in peer-to-peer (P2P) networks. With at most two rounds of exchanging and querying CFs, the sending node learns whether the transactions in a block are contained in the receiver's mempool, and then only needs to transfer the missing transactions to the receiver, which speeds up block synchronization and saves precious bandwidth. Evaluation results show that Gauze outperforms existing methods in terms of average processing latency (about 10× lower than Graphene) and total synchronization space cost (about 10× lower than Compact Blocks) in different scenarios.
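At its core the exchange is set reconciliation: the receiver summarizes its mempool in a filter, and the sender ships only the transactions the filter does not recognize. In the sketch below a plain Python set stands in for the Cuckoo filter (a real CF answers membership queries probabilistically, which is why Gauze may need a second round to resolve false positives); all names are ours.

```python
def reconcile_block(block_txs, mempool_filter):
    """Split a block's transactions into those the receiver already holds
    (according to its filter) and those that must be sent in full."""
    have, missing = [], []
    for tx in block_txs:
        (have if tx in mempool_filter else missing).append(tx)
    return have, missing

# The receiver's mempool summary; a set stands in for the Cuckoo filter.
mempool_filter = {"tx1", "tx2", "tx4"}
have, missing = reconcile_block(["tx1", "tx2", "tx3", "tx4"], mempool_filter)
print(missing)  # -> ['tx3']; only this transaction crosses the wire
```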

10.
Oblivious Cross-Tags (OXT) [1] is the first efficient searchable encryption (SE) protocol for conjunctive queries in a single-writer single-reader framework. However, it trades security for efficiency by leaking partial database information to the server, and recent attacks show that such leakage can be used to recover the content of queried keywords. To mitigate this, Lai et al. [2] proposed Hidden Cross-Tags (HXT), which reduces the access-pattern leakage from the Keyword Pair Result Pattern (KPRP) to the Whole Result Pattern (WRP). However, WRP leakage can still be used to recover some additional content of queried keywords. This paper proposes Improved Cross-Tags (IXT), an efficient searchable encryption protocol that achieves access- and search-pattern hiding based on labeled private set intersection (PSI). We prove that the proposed labeled PSI protocol is secure against semi-honest adversaries and that IXT is L-semi-honest secure, where L is the leakage function. Finally, we conduct experiments comparing IXT with HXT. The experimental results show that the storage overhead and the computation overhead of the search phase on the client side in IXT are much lower than those in HXT, and that IXT is scalable and can be applied to datasets of various sizes.

11.
Differential privacy has recently become a widely recognized, strict privacy protection model for data release. Differentially private histogram publishing can directly present the statistical distribution of data for query, sharing, and analysis while ensuring user privacy. Dynamic data release is a topic with broad current industry demand; however, the amount of data varies considerably across periods, and unreasonable data processing risks both leaking users' information and rendering the data unusable. We therefore design a differential privacy histogram publishing method based on a dynamic LSTM sliding window (DPHP-DL), which improves data availability while guaranteeing data privacy. DPHP-DL integrates DSW-LSTM and DPHK+. DSW-LSTM updates the size of the sliding window based on data value prediction via long short-term memory (LSTM) networks, thereby evenly dividing the data stream into several windows. DPHK+ heuristically publishes non-isometric histograms based on k-means++ clustering that automatically obtains the optimal K, achieving differentially private histogram publishing of dynamic data. Extensive experiments on real-world dynamic datasets demonstrate the superior performance of DPHP-DL.
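As background for the differentially private histogram building block: the classic ε-DP histogram perturbs each bin count with Laplace noise of scale 1/ε (adding or removing one record changes one count by at most 1). DPHP-DL layers the LSTM sliding window and clustering on top of this mechanism; the sketch below shows only the basic mechanism.

```python
import numpy as np

def dp_histogram(values, bins, epsilon, rng=None):
    """Publish an epsilon-DP histogram: the count sensitivity is 1,
    so Laplace noise of scale 1/epsilon per bin suffices."""
    rng = rng or np.random.default_rng()
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges  # counts cannot be negative

data = np.random.default_rng(1).normal(size=10_000)
noisy_counts, edges = dp_histogram(data, bins=20, epsilon=0.5)
```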

12.
王梅, 许传海, 刘勇. 《计算机应用》 (Journal of Computer Applications), 2021, 41(12): 3462-3467
Multi-kernel learning is an important class of kernel learning methods, but most existing multi-kernel methods suffer from two problems: the base kernels are usually traditional kernel functions with shallow structure, whose representation ability is weak when the data are large-scale and unevenly distributed; and the generalization error of most existing multi-kernel methods converges at a rate of only O(1/√n), which is slow. To address these issues, a multi-kernel learning method based on the Neural Tangent Kernel (NTK) is proposed. First, the NTK, which has a deep hierarchical structure, is used as the base kernel to strengthen the representation ability of multi-kernel learning. Then, a generalization error bound with a convergence rate of up to O(1/n) is proved via the principal eigenvalue proportion measure. On this basis, a new multi-kernel learning algorithm is designed by incorporating the kernel alignment measure. Finally, experiments on several datasets show that, compared with classification algorithms such as AdaBoost and k-Nearest Neighbors (KNN), the proposed multi-kernel learning algorithm achieves higher accuracy and better representation ability, verifying the feasibility and effectiveness of the proposed method.
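For reference, the standard (uncentered) kernel-target alignment for a kernel matrix K and label vector y is shown below; whether the paper uses exactly this form is our assumption.

```latex
\hat{A}\bigl(K,\, y y^{\top}\bigr) \;=\; \frac{\langle K,\; y y^{\top} \rangle_F}{\|K\|_F \,\| y y^{\top} \|_F},
```

where ⟨·,·⟩_F is the Frobenius inner product; a larger value indicates a kernel better matched to the labels.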

13.
The flourishing of deep learning frameworks and hardware platforms has created demand for an efficient compiler that can shield the diversity of both software and hardware in order to provide application portability. Among existing deep learning compilers, TVM is well known for its efficiency in code generation and optimization across diverse hardware devices. Meanwhile, the Sunway many-core processor renders itself a competitive candidate owing to its attractive computational power for both scientific computing and deep learning workloads. This paper combines the trends in these two directions. Specifically, we propose swTVM, which extends the original TVM to support ahead-of-time compilation for architectures that require cross-compilation, such as Sunway. In addition, we leverage architectural features during compilation, such as the core group for massive parallelism, DMA for high-bandwidth memory transfer, and local device memory for data locality, to generate efficient code for deep learning workloads on Sunway. The experimental results show that the code generated by swTVM achieves a 1.79× improvement in inference latency on average compared with the state-of-the-art deep learning framework on Sunway, across eight representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap between deep learning and the Sunway processor, with productivity and efficiency particularly in mind. We believe this work will encourage more people to embrace the power of deep learning and the Sunway many-core processor.

14.
刘帅, 蒋林, 李远成, 山蕊, 朱育琳, 王欣. 《计算机应用》 (Journal of Computer Applications), 2022, 42(5): 1524-1530
To address the poor adaptability, high computational complexity, and low efficiency of the Minimum Mean Square Error (MMSE) detection algorithm on reconfigurable array architectures in massive Multiple-Input Multiple-Output (MIMO) systems, a parallel mapping method for the MMSE algorithm is proposed, based on the reconfigurable array processor developed by the project team. First, exploiting the relatively simple data dependencies in the Gram matrix computation, a pipelined acceleration scheme with high temporal and spatial parallelism is designed. Second, since the Gram matrix computation and the matched filtering modules of the MMSE algorithm are relatively independent, a modular parallel mapping scheme is designed. Finally, the mapping scheme is implemented on a Xilinx Virtex-6 development board and its performance is measured. Experimental results show that in Quadrature Phase Shift Keying (QPSK) uplinks with MIMO scales of 128×4, 128×8, and 128×16, the method achieves speedups of 2.80, 4.04, and 5.57, respectively; in the 128×16 massive MIMO system, the reconfigurable array processor consumes 42.6% fewer resources than dedicated hardware.
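For reference, the MMSE detector being parallelized computes the Gram matrix HᴴH and the matched filter Hᴴy, then solves a regularized linear system; a plaintext numpy sketch with illustrative dimensions:

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """MMSE detection: x_hat = (H^H H + sigma^2 I)^{-1} H^H y.
    The Gram matrix and matched filter are the two relatively independent
    modules that the mapping scheme computes in parallel."""
    G = H.conj().T @ H                   # Gram matrix
    mf = H.conj().T @ y                  # matched filter
    return np.linalg.solve(G + sigma2 * np.eye(H.shape[1]), mf)

rng = np.random.default_rng(0)
H = (rng.normal(size=(128, 16)) + 1j * rng.normal(size=(128, 16))) / np.sqrt(2)
x = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=16) / np.sqrt(2)  # QPSK symbols
y = H @ x + 0.05 * (rng.normal(size=128) + 1j * rng.normal(size=128))
print(np.round(mmse_detect(H, y, sigma2=0.005), 2))
```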

15.
To evaluate the terrain shadow removal performance of the Shadow-Eliminated Vegetation Index (SEVI) on commonly used decameter-level remote sensing images of different spatial resolutions, multispectral images at four spatial resolutions, all acquired on 24-25 January 2019, were used: Sentinel S2B (10 m), GF-1 (16 m), Landsat 8 OLI (30 m), and GF-4 (50 m). From these we computed the NDVI based on surface reflectance, the SEVI, and the NDVI based on reflectance corrected by the SCS+C model. The evaluation methods included vegetation index value analysis, relative error analysis for self shadows and cast shadows, coefficient-of-variation analysis, and scatter-plot analysis of the vegetation indices against the cosine of the solar incidence angle (cos i). The results show that the relative errors of the SEVI in self-shadow areas at the four resolutions were 2.172%, 1.422%, 1.351%, and 1.060%, respectively; the corresponding relative errors in cast-shadow areas were 2.598%, 2.801%, 3.795%, and 2.711%; the coefficients of determination between the SEVI and cos i were 0.0173, 0.0107, 0.0011, and 0.0001; and the corresponding coefficients of variation were 10.036%, 9.070%, 8.051%, and 1.631%. These results indicate that the SEVI corrects terrain shadows well in remote sensing images with spatial resolutions of 10-50 m, outperforming the NDVI computed from SCS+C-corrected surface reflectance, and that the terrain shadow effect in remote sensing images weakens as the spatial resolution decreases.

16.
A k-CNF (conjunctive normal form) formula is a regular (k, s)-CNF formula if every variable occurs s times in the formula, where k ≥ 2 and s > 0 are integers. Regular (3, s)-CNF formulas have good structural properties, so carrying out a probability analysis of the structure of random formulas of this type is easier than conducting such an analysis for random 3-CNF formulas. Some subclasses of regular (3, s)-CNF formulas also have intractability characteristics that differ from those of random 3-CNF formulas. For this purpose, we propose the strictly d-regular (k, 2s)-CNF formula: a regular (k, 2s)-CNF formula in which d ≥ 0 is an even number and each literal occurs s - d/2 or s + d/2 times (the literals from a variable x are x and ¬x, where x is positive and ¬x is negative). In this paper, we present a new model to generate strictly d-regular random (k, 2s)-CNF formulas, and focus on strictly d-regular random (3, 2s)-CNF formulas. Let F be a strictly d-regular random (3, 2s)-CNF formula such that 2s > d. We show that there exists a real number s0 such that the formula F is unsatisfiable with high probability when s > s0, and we present a numerical solution for s0. The result is supported by simulated experiments and is consistent with the existing conclusion for the case d = 0. Furthermore, we conjecture that, for a given d, the strictly d-regular random (3, 2s)-SAT problem has an SAT-UNSAT (satisfiable-unsatisfiable) phase transition; our experiments support this conjecture. Finally, our experiments also show that the parameter d is correlated with the intractability of the 3-SAT problem. Therefore, our research may be helpful for generating hard random instances of 3-CNF formulas.
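A natural way to sample such formulas is the configuration-model idea: lay out each literal the prescribed number of times and shuffle the multiset into clauses of size k. We do not claim this is exactly the paper's generator; the sketch below also fixes one arbitrary split of occurrences between x and ¬x.

```python
import random

def strictly_d_regular_cnf(n_vars, k, s, d, seed=None):
    """Sample a strictly d-regular (k, 2s)-CNF-style formula: each positive
    literal occurs s + d/2 times and each negative literal s - d/2 times
    (one arbitrary such split; d must be even and 2s > d). A uniform model
    would also need care with repeated variables; here we just reshuffle."""
    assert d % 2 == 0 and 2 * s > d and (n_vars * 2 * s) % k == 0
    rng = random.Random(seed)
    pool = []
    for v in range(1, n_vars + 1):
        pool += [v] * (s + d // 2) + [-v] * (s - d // 2)
    while True:
        rng.shuffle(pool)
        clauses = [tuple(pool[i:i + k]) for i in range(0, len(pool), k)]
        if all(len({abs(l) for l in c}) == k for c in clauses):
            return clauses  # accept only if no clause repeats a variable

print(strictly_d_regular_cnf(n_vars=6, k=3, s=2, d=2, seed=0)[:3])
```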

17.
Dimensionality reduction (DR) methods based on sparse representation, one of the hottest research topics, have achieved remarkable performance in many applications in recent years. However, it is a challenge for existing sparse-representation-based methods to solve nonlinear problems, owing to the limitation of seeking a sparse representation of data in the original space. Motivated by kernel tricks, we propose a new framework called empirical kernel sparse representation (EKSR) to solve nonlinear problems. In this framework, nonlinearly separable data are mapped into a kernel space in which the nonlinear similarity can be captured, and the data in the kernel space are then reconstructed by sparse representation to preserve the sparse structure, which is obtained by minimizing an ℓ1-regularization-related objective function. EKSR provides new insights into dimensionality reduction and extends two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); and 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Both methods can choose neighborhoods automatically owing to the natural discriminative power of sparse representation. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
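The ℓ1-regularized reconstruction can be written in a standard sparse-coding form (symbols ours, consistent with the abstract): for each mapped sample φ(x_i), with the remaining mapped samples collected as a dictionary Φ_{-i}, solve

```latex
\min_{a_i} \;\; \bigl\| \varphi(x_i) - \Phi_{-i}\, a_i \bigr\|_2^2 \;+\; \lambda \,\| a_i \|_1 ,
```

where the sparse codes a_i then define the neighborhood structure that EKSPP preserves and that EKSS scores.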

18.
To address the poor steady-state performance, and even failure to converge, of conventional sparse adaptive filters in impulsive-noise environments, and to improve the accuracy of sparse parameter identification without adding excessive computational cost, a sparse adaptive filtering algorithm based on the Generalized Maximum Versoria Criterion (GMVC), namely GMVC with a CIM constraint (CIMGMVC), is proposed. First, the generalized Versoria function, which contains the reciprocal of the p-th moment of the error, is used as the learning criterion; when impulsive interference produces a very large error, the GMVC tends to zero, thereby suppressing the impulsive noise. Second, the Correntropy-Induced Metric (CIM) is combined with the GMVC as a sparsity penalty to construct a new cost function; the CIM is built on the Gaussian probability density function and can approach the ℓ0-norm arbitrarily closely when an appropriate kernel width is chosen. Finally, the CIMGMVC algorithm is derived using the gradient method, and its mean-square convergence is analyzed. Simulations on the Matlab platform with impulsive noise generated by an α-stable distribution model show that the proposed CIMGMVC algorithm effectively suppresses non-Gaussian impulsive interference, is more robust than conventional sparse adaptive filters, and achieves a lower steady-state error than the GMVC algorithm.

19.
蒋楚钰, 方李西, 章宁, 朱建明. 《计算机应用》 (Journal of Computer Applications), 2022, 42(11): 3438-3443
To address the concentration of notary node functions and the low efficiency of cross-chain transactions in the notary mechanism, a cross-chain interaction security model based on notary groups is proposed. First, notary nodes are divided into three roles (transaction verifiers, connectors, and supervisors); members of the transaction verification group pack multiple transactions that have reached consensus into one large transaction and sign it using threshold signature technology. Second, confirmed transactions are placed in a cross-chain pending-transfer pool, from which connectors randomly select multiple transactions and judge their authenticity using techniques such as secure multi-party computation and homomorphic encryption. Finally, if the hash of the package of all qualifying transactions is authentic and has been verified by the transaction verification group, the connectors can proceed to execute the batch of cross-chain transactions and exchange information with the blockchain. Security analysis shows that this cross-chain mechanism helps protect the confidentiality of information and the integrity of data, enables collaborative computation without the data leaving its store, and safeguards the stability of the cross-chain blockchain system. Compared with traditional cross-chain interaction security models, the complexity of both the number of signatures and the number of notary groups to be allocated is reduced from O(n) to O(1) in the proposed model.

20.
Traditional clustering methods operate in the original data space, and the data to be clustered are often high-dimensional. To address these two problems, a new binary image clustering method, Clustering based on Discrete Hashing (CDH), is proposed. The framework performs adaptive feature selection via the ℓ2,1-norm, thereby reducing the dimensionality of the data; meanwhile, the data are mapped into a binary Hamming space by hashing, after which a low-rank matrix factorization is applied to the sparse binary matrix in the Hamming space to achieve fast image clustering...
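For reference, the ℓ2,1-norm that drives the adaptive feature selection is the sum of the row-wise ℓ2-norms,

```latex
\| W \|_{2,1} \;=\; \sum_{i} \Bigl( \sum_{j} w_{ij}^{2} \Bigr)^{1/2},
```

so minimizing it pushes entire rows of W to zero, discarding the corresponding features.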
