101.
Patent users such as governments, inventors, and manufacturing organizations strive to identify the directions in which new technology is advancing, and their goal is to outline the boundaries of existing knowledge. The paper analyzes patent knowledge to identify research trends. A model based on knowledge extraction from patents and self-organizing maps for knowledge representation is presented. The model was tested on patents from the United States Patent and Trademark Office. The experiments show that the method provides both an overview of the directions of the trends and a drill-down perspective of current trends.
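The abstract gives no implementation details, but the self-organizing-map step can be illustrated with a minimal sketch. The following pure-Python toy (all names and the feature vectors are hypothetical stand-ins for patent term statistics, not the paper's model) trains a small SOM grid and maps new vectors to their best-matching unit:

```python
import math
import random

def train_som(data, rows=3, cols=3, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny self-organizing map on a list of feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Initialize each grid node with a random weight vector.
    weights = {(r, c): [rng.random() for _ in range(dim)]
               for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            # Best-matching unit: node whose weights are closest to x.
            winner = min(weights, key=lambda n: sum(
                (w - xi) ** 2 for w, xi in zip(weights[n], x)))
            for n, w in weights.items():
                d2 = (n[0] - winner[0]) ** 2 + (n[1] - winner[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # neighborhood kernel
                weights[n] = [wi + lr * h * (xi - wi)
                              for wi, xi in zip(w, x)]
    return weights

def bmu(weights, x):
    """Map a vector to its best-matching grid node."""
    return min(weights, key=lambda n: sum(
        (w - xi) ** 2 for w, xi in zip(weights[n], x)))

# Toy "patent" feature vectors: two clusters of term frequencies.
cluster_a = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.0], [1.0, 0.8, 0.2]]
cluster_b = [[0.1, 0.0, 1.0], [0.0, 0.2, 0.9], [0.2, 0.1, 1.0]]
som = train_som(cluster_a + cluster_b)
print("BMU for a sample patent vector:", bmu(som, cluster_a[0]))
```

After training, patents with similar feature vectors land on the same or nearby grid cells, which is the property the paper exploits to visualize trend clusters.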
102.
An instance of the k-Steiner forest problem consists of an undirected graph G=(V,E), the edges of which are associated with non-negative costs, and a collection 𝒟 = {(s_1,t_1), …, (s_d,t_d)} of distinct pairs of vertices, interchangeably referred to as demands. We say that a forest ℱ ⊆ G connects a demand (s_i,t_i) when it contains an s_i–t_i path. Given a profit k_i for each demand (s_i,t_i) and a requirement parameter k, the goal is to find a minimum-cost forest that connects a subset of demands whose combined profit is at least k. This problem has recently been studied by Hajiaghayi and Jain (SODA ’06), whose main contribution in this context was to relate the inapproximability of k-Steiner forest to that of the dense k-subgraph problem. However, Hajiaghayi and Jain did not provide any algorithmic result for the respective settings, and posed this objective as an important direction for future research. In this paper, we present the first non-trivial approximation algorithm for the k-Steiner forest problem, which is based on a novel extension of the Lagrangian relaxation technique. Specifically, our algorithm constructs a feasible forest whose cost is within a factor of O(min{n^{2/3}, √d} · log d) of optimal, where n is the number of vertices in the input graph and d is the number of demands. We believe that the approach illustrated in the current writing is of independent interest, and may be applicable in other settings as well.
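The paper's algorithm is a Lagrangian-relaxation-based approximation; to make the problem definition concrete, here is a naive exponential-time baseline (emphatically not the paper's method) that enumerates demand subsets meeting the profit requirement and connects each chosen pair by a shortest path, charging every distinct edge once:

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest-path distances and predecessor map from src."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def path_edges(prev, src, dst):
    """Edges on the shortest path src -> dst, as frozensets."""
    edges, v = set(), dst
    while v != src:
        u = prev[v]
        edges.add(frozenset((u, v)))
        v = u
    return edges

def naive_k_steiner_forest(adj, cost, demands, profits, k):
    """Try every demand subset with combined profit >= k; a crude
    exact search for tiny instances, not the paper's algorithm."""
    best = (float("inf"), None)
    for r in range(1, len(demands) + 1):
        for subset in combinations(range(len(demands)), r):
            if sum(profits[i] for i in subset) < k:
                continue
            used = set()
            for i in subset:
                s, t = demands[i]
                _, prev = dijkstra(adj, s)
                used |= path_edges(prev, s, t)
            total = sum(cost[e] for e in used)
            if total < best[0]:
                best = (total, subset)
    return best

# Tiny example: path a-b-c (cost 1 each) plus an expensive edge a-d.
edges = [("a", "b", 1), ("b", "c", 1), ("a", "d", 5)]
adj, cost = {}, {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))
    cost[frozenset((u, v))] = w
demands, profits = [("a", "c"), ("a", "d")], [2, 1]
print(naive_k_steiner_forest(adj, cost, demands, profits, k=2))
```

With k=2 the first demand alone suffices, so the baseline returns cost 2 for subset (0,); raising k to 3 forces both demands and cost 7. The point of the paper is precisely that one can do far better than this brute force while provably staying within the stated approximation factor.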
103.
Bellare, Boldyreva, and O’Neill (CRYPTO ’07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially chosen-plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme. We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by \(2^{p}\), where p can be any predetermined polynomial in the security parameter and plaintext length. For example, with \(p = 0\) we capture plaintext distributions that are independent of the public key, and with \(p = O(s \log s)\) we capture, in particular, all plaintext distributions that are samplable by circuits of size s. Within our framework we present both constructions in the random oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. 
Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS ’00) and Dodis (Ph.D. Thesis, MIT, ’00).
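To see why extraction is the crux, a generic leftover-hash-lemma-style extractor (an illustrative sketch only, not the paper's construction) can be written in a few lines:

```python
import secrets

# Pairwise-independent hash family h_{a,b}(x) = ((a*x + b) mod P) mod 2^m.
# By the leftover hash lemma, a randomly chosen member of this family
# extracts nearly uniform bits from any source with sufficient
# min-entropy -- provided the source does not depend on the seed (a, b).
# In deterministic public-key encryption the seed lives in the public
# key, which is exactly why key-dependent plaintext distributions defeat
# the classical analysis that this abstract sets out to repair.
P = (1 << 127) - 1  # the Mersenne prime 2^127 - 1, used as field modulus

def sample_hash(m):
    """Draw a random member of the family, extracting m bits."""
    a = secrets.randbelow(P - 1) + 1
    b = secrets.randbelow(P)
    return lambda x: ((a * (x % P) + b) % P) % (1 << m)

h = sample_hash(32)
x = int.from_bytes(b"high-entropy plaintext goes here", "big")
print(f"extracted 32-bit value: {h(x):#010x}")
```

The sketch highlights the gap the paper closes: once the adversary may pick the plaintext distribution after seeing (a, b), the min-entropy guarantee no longer implies near-uniform output, and a more careful seed-dependent analysis is needed.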
104.
A knowledge of the mechanism of cathodic polarisation of metals in various acids is of practical interest for the nuclear industry. After a brief survey of research in this field, the possible ways of applying cathodic polarisation are reviewed. The most important applications include the selective dissolution of cans from nuclear fuels, cathodic decontamination of steels, and the dissolution and separation of metals and alloys.
105.
Alcohol and nicotine are widely abused legal substances worldwide. Relapse to alcohol or tobacco seeking and consumption after abstinence is a major clinical challenge, and is often evoked by cue-induced craving. Therefore, disruption of the memory for the cue–drug association is expected to suppress relapse. Memories have been postulated to become labile shortly after their retrieval, during a “memory reconsolidation” process. Interference with the reconsolidation of drug-associated memories has been suggested as a possible strategy to reduce or even prevent cue-induced craving and relapse. Here, we surveyed the growing body of studies in animal models and in humans assessing the effectiveness of pharmacological or behavioral manipulations in reducing relapse by interfering with the reconsolidation of alcohol and nicotine/tobacco memories. Our review points to the potential of targeting the reconsolidation of these memories as a strategy to suppress relapse to alcohol drinking and tobacco smoking. However, we discuss several critical limitations and boundary conditions, which should be considered to improve the consistency and replicability in the field, and for development of an efficient reconsolidation-based relapse-prevention therapy.
106.
This paper deals with the application of the convolutive version of dictionary learning to the analysis of in-situ audio recordings for bio-acoustics monitoring. We propose an efficient approach for learning and using a sparse convolutive model to represent a collection of spectrograms. In this approach, we identify repeated bioacoustics patterns, e.g., bird syllables, as words and represent new spectrograms using these words. Moreover, we propose a supervised dictionary learning approach in the multiple-label setting to support multi-label classification of unlabeled spectrograms. Our approach relies on a random projection for reduced computational complexity. As a consequence, the non-negativity requirement on the dictionary words is relaxed. Furthermore, the proposed approach is well-suited for a collection of discontinuous spectrograms. We evaluate our approach on synthetic examples and on two real datasets of audio recordings of multiple bird species. Bird syllable dictionary learning from a real-world dataset is demonstrated. Additionally, we successfully apply the approach to spectrogram denoising and species classification.
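The core idea of sparse convolutive coding can be sketched in 1-D with greedy matching pursuit (a hypothetical toy: real spectrograms are 2-D, and the paper further adds random projections and supervision). Each step picks the (atom, shift) pair whose shifted copy best correlates with the residual, records its coefficient, and subtracts its contribution:

```python
def conv_matching_pursuit(signal, atoms, n_iters=3):
    """Greedy sparse convolutive coding of a 1-D signal."""
    residual = list(signal)
    code = []  # list of (atom index, shift, coefficient)
    for _ in range(n_iters):
        best = None
        for ai, atom in enumerate(atoms):
            norm2 = sum(a * a for a in atom)
            for shift in range(len(residual) - len(atom) + 1):
                # Correlation of the atom with the residual at this shift.
                corr = sum(a * residual[shift + j]
                           for j, a in enumerate(atom))
                coeff = corr / norm2
                gain = corr * coeff  # energy removed by this choice
                if best is None or gain > best[0]:
                    best = (gain, ai, shift, coeff)
        _, ai, shift, coeff = best
        code.append((ai, shift, coeff))
        # Subtract the selected atom's contribution from the residual.
        for j, a in enumerate(atoms[ai]):
            residual[shift + j] -= coeff * a
    return code, residual

# Toy "spectrogram row": two copies of a syllable-like atom at shifts 1 and 6.
atom = [1.0, 2.0, 1.0]
signal = [0.0] * 10
for s in (1, 6):
    for j, a in enumerate(atom):
        signal[s + j] += a
code, residual = conv_matching_pursuit(signal, [atom], n_iters=2)
print(sorted(shift for _, shift, _ in code))
```

Two iterations recover both placements of the syllable and drive the residual to zero; in the paper's setting the recovered (word, position, coefficient) triples are what feed the multi-label classifier.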
107.
Motivated by applications in large storage systems, we initiate the study of incremental deterministic public-key encryption. Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security for low-entropy plaintext distributions, but Bellare et al. demonstrated that a strong notion of security can in fact be realized for relatively high-entropy plaintext distributions. In order to achieve a meaningful level of security, a deterministic encryption algorithm should typically be used for encrypting rather long plaintexts, to ensure a sufficient amount of entropy. This requirement may be at odds with efficiency constraints, such as communication complexity and computation complexity in the presence of small updates. Thus, a highly desirable property of deterministic encryption algorithms is incrementality: Small changes in the plaintext translate into small changes in the corresponding ciphertext. We present a framework for modeling the incrementality of deterministic public-key encryption. Our framework extends the study of the incrementality of cryptography primitives initiated by Bellare, Goldreich and Goldwasser (CRYPTO ’94). Within our framework, we propose two schemes, which we prove to enjoy an optimal tradeoff between their security and incrementality up to lower-order factors. Our first scheme is a generic method which can be based on any deterministic public-key encryption scheme, and, in particular, can be instantiated with any semantically secure (randomized) public-key encryption scheme in the random-oracle model. Our second scheme is based on the Decisional Diffie–Hellman assumption in the standard model.
The approach underpinning our schemes is inspired by the fundamental “sample-then-extract” technique due to Nisan and Zuckerman (JCSS ’96) and refined by Vadhan (J. Cryptology ’04), and by the closely related notion of “locally computable extractors” due to Vadhan. Most notably, whereas Vadhan used such extractors to construct private-key encryption schemes in the bounded-storage model, we show that techniques along these lines can also be used to construct incremental public-key encryption schemes.
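The incrementality property itself is easy to demonstrate with a block-wise toy (all names hypothetical; a hash stands in for a deterministic per-block encryption, so this sketch is not decryptable and is not the paper's scheme): changing one plaintext byte changes exactly one ciphertext block.

```python
import hashlib

BLOCK = 16  # bytes per plaintext block

def det_enc_block(pk, index, block):
    # Stand-in for a deterministic public-key encryption of one block;
    # a real scheme would be invertible with the secret key.
    return hashlib.sha256(pk + index.to_bytes(8, "big") + block).digest()

def det_enc(pk, plaintext):
    """Encrypt block by block, deterministically."""
    blocks = [plaintext[i:i + BLOCK]
              for i in range(0, len(plaintext), BLOCK)]
    return [det_enc_block(pk, i, b) for i, b in enumerate(blocks)]

def hamming_blocks(c1, c2):
    """Number of ciphertext blocks that differ."""
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

pk = b"public-parameters"
m1 = bytearray(b"A" * 64)
m2 = bytearray(m1)
m2[5] ^= 0xFF  # flip one byte in the first block
c1, c2 = det_enc(pk, bytes(m1)), det_enc(pk, bytes(m2))
print("ciphertext blocks changed:", hamming_blocks(c1, c2))
```

Note that naive per-block determinism leaks which blocks of two plaintexts are equal; balancing such leakage against update cost is exactly the security/incrementality tradeoff the paper proves optimal bounds for, and is why its actual constructions rely on the more subtle sample-then-extract machinery.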
108.
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Although the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided. Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has a tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages \(\mathsf{m}_1, \ldots , \mathsf{m}_T\) together with decryption keys corresponding to functions \(f_1, \ldots , f_T\), reveal essentially no information other than the values \(\{ f_i(\mathsf{m}_j)\}_{i,j\in [T]}\). Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products) or offer somewhat weak notions of function privacy. We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based on the learning with errors assumption, on obfuscation assumptions, on simple multilinear-maps assumptions, or even on the existence of any one-way function (offering various trade-offs between security and efficiency).
109.
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks. One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key. In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length (1 − o(1))L bits, where L is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
110.