This article addresses the problem of input-to-state stabilization for recurrent neural networks with delay and external disturbance. The goal is to design a suitable weight-learning law that renders the considered network input-to-state stable with a predefined gain. Based on the solution of linear matrix inequalities, two schemes for the desired learning law are presented, using decay-rate-dependent and decay-rate-independent Lyapunov functionals, respectively. It is shown that, in the absence of external disturbance, the proposed learning law also guarantees exponential stability of the network. To illustrate the applicability of the proposed weight-learning law, two numerical examples with simulations are given.
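As a rough illustration of the linear-matrix-inequality machinery the abstract refers to, the following minimal sketch checks a prototype Lyapunov LMI with CVXPY. The matrices A and Q and the specific condition are assumptions for illustration only; they are not the paper's actual delay-dependent conditions or learning law.

```python
# Minimal sketch (assumed condition, not the paper's): find P > 0 such that
# A^T P + P A + Q < 0, the prototype Lyapunov inequality behind
# decay-rate-dependent / -independent designs solved via LMIs.
import numpy as np
import cvxpy as cp

n = 2
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])  # hypothetical system matrix
Q = np.eye(n)

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                        # P positive definite
    A.T @ P + P @ A + Q << -eps * np.eye(n),     # Lyapunov LMI
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```

If the LMI is feasible, the returned P certifies the (assumed) stability condition; the paper's learning laws are built from solutions of analogous, delay-aware inequalities.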
Emerging privacy-preserving technologies help protect sensitive data during application execution. Recently, the secure two-party computation (TPC) scheme has demonstrated its potential, especially for secure model inference in deep learning applications, by protecting both the user input data and the model parameters. Nevertheless, existing TPC protocols incur excessive communication during program execution, which lengthens the execution time. In this work, we propose a precomputing scheme, POPS, to address this problem by shifting the required communication from execution time to the time prior to execution. In particular, the multiplication triple generation is performed beforehand with POPS to remove this overhead at runtime. We have analyzed the TPC protocols to ensure that the precomputing scheme conforms to the existing secure protocols. Our results show that POPS takes a step forward in secure inference, delivering up to \(20\times \) and \(5\times \) speedups over prior work for the microbenchmark and the convolutional neural network experiments, respectively.
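To make the precomputation idea concrete, here is a minimal, hypothetical sketch of Beaver multiplication triples over additive secret shares; it is not the POPS implementation, but it shows why generating triples ahead of time leaves only two cheap openings in the online phase of each secure multiplication.

```python
# Hypothetical sketch: Beaver-triple multiplication over additive shares.
# Offline phase: generate shares of (a, b, c) with c = a*b.
# Online phase: open d = x - a and e = y - b, then combine locally.
import random

P = 2**61 - 1  # prime modulus for additive secret sharing

def share(x):
    """Split x into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (x - r) % P

def precompute_triple():
    """Offline phase: shares of a random triple c = a*b."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def secure_mul(x_sh, y_sh, triple):
    """Online phase: multiply shared x and y using a precomputed triple."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # The only online communication: both parties open d and e.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    # Each party computes its share of x*y locally; one party adds d*e.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

x_sh, y_sh = share(6), share(7)
z_sh = secure_mul(x_sh, y_sh, precompute_triple())
print((z_sh[0] + z_sh[1]) % P)  # reconstructs 42
```

Because the triple is independent of the actual inputs, its generation (the expensive, communication-heavy part in existing protocols) can be moved entirely before inference, which is the effect the abstract attributes to POPS.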