Search results: 213 items in total.
1.
Let Λ be a finite plaintext alphabet and V be a cypher alphabet with the same cardinality as Λ. In any one-to-one substitution cypher, each element of V maps onto exactly one element of Λ and vice versa. This mapping of V onto Λ is represented by a function T*, which maps any v ∈ V onto some λ ∈ Λ (i.e., T*(v) = λ). The problem of learning the mapping T* (or its inverse (T*)⁻¹) by processing a sequence of cypher text is discussed. The fastest previously reported method for this task is a relaxation scheme that utilizes the statistical information contained in the unigrams and trigrams of the plaintext language. A new learning automaton solution to the problem, called the cypher learning automaton (CLA), is given. The proposed scheme is fast, and its advantages over the relaxation method in terms of time and space requirements are enumerated. Simulation results comparing both cypher-breaking techniques are presented.
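To make the problem setting concrete, here is a minimal sketch, not the CLA itself: a random one-to-one substitution cypher T* and a naive unigram-frequency attack that guesses (T*)⁻¹ from cypher text alone. The alphabet, function names, and reference text are illustrative assumptions.

```python
import random
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def make_cypher(seed=0):
    """Build a random one-to-one mapping T*: plaintext -> cypher symbol."""
    rng = random.Random(seed)
    shuffled = list(ALPHABET)
    rng.shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encypher(plaintext, t_star):
    return "".join(t_star.get(c, c) for c in plaintext)

def estimate_inverse(cypher_text, reference_text):
    """Guess (T*)^-1 by aligning unigram frequency ranks.

    The CLA and the relaxation scheme do far better by also exploiting
    trigram statistics; this baseline only ranks single letters.
    """
    cypher_rank = [c for c, _ in Counter(
        c for c in cypher_text if c in ALPHABET).most_common()]
    plain_rank = [c for c, _ in Counter(
        c for c in reference_text if c in ALPHABET).most_common()]
    return dict(zip(cypher_rank, plain_rank))

t_star = make_cypher()
sample = "the quick brown fox jumps over the lazy dog " * 50
guess = estimate_inverse(encypher(sample, t_star), sample)
print(guess)
```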
2.
3.
This paper presents a brief discussion of the development of electrical-grade paper/pressboard for transformer use, covering the raw materials, the improvements made, and in particular the use of thermal upgrading agents to extend the useful life of transformers.
4.
An important part of network analysis is understanding community structures such as topological clusters and attribute-based groups. Standard approaches for showing communities using colour, shape, rectangular bounding boxes, convex hulls or force-directed layout algorithms remain valuable; however, our Group-in-a-Box meta-layouts add a fresh strategy for presenting community membership, internal structure and inter-cluster relationships. This paper extends the basic Group-in-a-Box meta-layout, which uses a Treemap substrate of rectangular regions whose size is proportional to community size. When there are numerous inter-community relationships, the proposed extensions help users view them more clearly: (1) the Croissant–Doughnut meta-layout applies empirically determined rules for box arrangement to improve space utilization while still showing inter-community relationships, and (2) the Force-Directed meta-layout arranges community boxes according to their aggregate ties, at the cost of additional space. Our free and open-source reference implementation in NodeXL includes heuristics for choosing what we have found to be the preferable Group-in-a-Box meta-layout for networks with varying numbers and sizes of communities. Case study examples, a pilot comparative user preference study (nine participants), and a readability-measure-based evaluation of 309 Twitter networks demonstrate the utility of the proposed meta-layouts.
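A minimal sketch of the Treemap-substrate idea follows: a recursive slice-and-dice split that gives each community a rectangle whose area is proportional to its size. The function name and the example sizes are made up; NodeXL's actual meta-layouts, including the Croissant–Doughnut rules, use more refined packing.

```python
def slice_and_dice(sizes, x, y, w, h, vertical=True):
    """Lay out len(sizes) boxes inside (x, y, w, h); each box's area is
    proportional to its community size. Splits alternate direction."""
    if len(sizes) == 1:
        return [(x, y, w, h)]
    # Split the groups roughly in half by total size.
    half, acc, split = sum(sizes) / 2, 0.0, 0
    while acc < half and split < len(sizes) - 1:
        acc += sizes[split]
        split += 1
    frac = acc / sum(sizes)
    if vertical:   # cut along the x axis, then alternate
        return (slice_and_dice(sizes[:split], x, y, w * frac, h, False) +
                slice_and_dice(sizes[split:], x + w * frac, y,
                               w * (1 - frac), h, False))
    return (slice_and_dice(sizes[:split], x, y, w, h * frac, True) +
            slice_and_dice(sizes[split:], x, y + h * frac,
                           w, h * (1 - frac), True))

# Four communities of 40, 25, 20 and 15 nodes in a unit canvas:
print(slice_and_dice([40, 25, 20, 15], 0.0, 0.0, 1.0, 1.0))
```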
5.
The effects of the blend ratio, reactive compatibilization, and dynamic vulcanization on the dynamic mechanical properties of high-density polyethylene (HDPE)/ethylene vinyl acetate (EVA) blends have been analyzed at different temperatures. The storage modulus of the blend decreases with an increase in the EVA content. The loss factor curve shows two peaks, corresponding to the transitions of HDPE and EVA, indicating the incompatibility of the blend system. Attempts have been made to correlate the observed viscoelastic properties of the blends with the blend morphology. Various composite models have been used to predict the dynamic mechanical data. The experimental values are close to those of the Halpin–Tsai model above 50 wt % EVA and close to those of the Coran model up to 50 wt % EVA in the blend. For the Takayanagi model, the theoretical value is in good agreement with the experimental value for a 70/30 HDPE/EVA blend. The area under the loss modulus/temperature curve (LA) has been obtained by integrating the experimental curve and has been compared with the value obtained from group contribution analysis; the LA values calculated with group contribution analysis are lower than those calculated by integration. The addition of a maleic-modified polyethylene compatibilizer increases the storage modulus, loss modulus, and loss factor values of the system because of the finer dispersion of the EVA domains in the HDPE matrix upon compatibilization. For 70/30 and 50/50 blends, the compatibilizer shifts the relaxation temperatures of both HDPE and EVA to lower temperatures, indicating increased interdiffusion of the two phases at the interface. However, for a 30/70 HDPE/EVA blend, the compatibilizer does not change the relaxation temperature, which may be due to the cocontinuous morphology of the blend. The dynamic vulcanization of the EVA phase with dicumyl peroxide increases both the storage and loss moduli of the blends. A significant increase in the relaxation temperature of EVA and a broadening of the relaxation peaks occur during dynamic vulcanization, indicating increased interaction between the two phases. © 2003 Wiley Periodicals, Inc. J Appl Polym Sci 87: 2083–2099, 2003
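For reference, the Halpin–Tsai model mentioned above is commonly written in the following standard textbook form; the exact variant used in the paper, and the Coran and Takayanagi expressions, may differ.

```latex
% Standard Halpin-Tsai form for the modulus M of a two-phase blend,
% with matrix modulus M_m, dispersed-phase modulus M_d, dispersed-phase
% volume fraction \phi_d, and shape factor A:
\[
\frac{M}{M_m} = \frac{1 + A B \phi_d}{1 - B \phi_d},
\qquad
B = \frac{M_d/M_m - 1}{M_d/M_m + A}
\]
```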
6.
We consider the problem of polling web pages as a strategy for monitoring the world wide web. The problem consists of repeatedly polling a selection of web pages so that changes that occur over time are detected. In particular, we consider the case where we are constrained to poll a maximum number of web pages per unit of time, a constraint typically dictated by the available communication bandwidth and by processing-speed limitations. Since only a fraction of the web pages can be polled within a given unit of time, the issue at stake is that of determining which web pages are to be polled, and we attempt to do so in a manner that maximizes the number of changes detected. We solve the problem by first modelling it as a stochastic nonlinear fractional knapsack problem. We then present an online learning automata (LA) system, namely, the hierarchy of twofold resource allocation automata (H-TRAA), whose primitive component is a twofold resource allocation automaton (TRAA). Both the TRAA and the H-TRAA have been proven to be asymptotically optimal. Finally, we demonstrate empirically that the H-TRAA converges orders of magnitude faster than the learning automata knapsack game (LAKG), which represents the state of the art for this problem. Further, in contrast to the LAKG, the H-TRAA scales sub-linearly. Based on these results, we believe that the H-TRAA also has tremendous potential to handle demanding real-world applications, particularly those dealing with the world wide web.
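The following is a minimal sketch of the TRAA idea for two pages, assuming a hypothetical change model and made-up rates; the published update rules and their optimality proofs are considerably more refined. The discretized random walk settles where a poll of either page is equally likely to find a change.

```python
import math
import random

CAPACITY = 100.0              # polls available per unit of time (assumed)
UPDATE_RATES = (40.0, 10.0)   # hypothetical page change rates
N = 200                       # discretization resolution
state = N // 2                # page 0 receives state/N of the capacity

def change_found(page, share):
    """Simulated poll: the chance of finding a fresh change falls as a
    page is polled more often (changes per poll interval get scarcer)."""
    polls = max(share, 1e-6) * CAPACITY
    return random.random() < 1.0 - math.exp(-UPDATE_RATES[page] / polls)

for _ in range(100_000):
    page = random.randrange(2)                 # test one page at random
    share = state / N if page == 0 else 1 - state / N
    toward = 1 if page == 0 else -1
    # Random walk: a detected change nudges capacity toward the polled
    # page; a wasted poll nudges capacity away from it.
    step = toward if change_found(page, share) else -toward
    state = min(max(state + step, 1), N - 1)

print(f"share for page 0: {state / N:.2f}")    # roughly 0.8 here
```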
7.
CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) are in common use today as a method for performing automated human verification online. The most popular type of CAPTCHA is the text recognition variety. However, many of the existing printed-text CAPTCHAs have been broken by web bots and are hence vulnerable to attack. We present an approach that uses human-like handwriting for designing CAPTCHAs. A synthetic handwriting generation method is presented, in which the generated textlines need to be as close as possible to human handwriting without being writer-specific. Such handwritten CAPTCHAs exploit the differential in handwriting reading proficiency between humans and machines. Test results show that when the generated textlines are further obfuscated with a set of deformations, machine recognition rates decrease considerably compared to prior work, while human recognition rates remain the same.
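As a minimal illustration of the obfuscation step only, here is a generic sine-wave deformation applied to rendered text using Pillow. This is not the paper's handwriting synthesis; the font, amplitude and wavelength are arbitrary choices.

```python
import math
from PIL import Image, ImageDraw, ImageFont

def render_text(text, size=(320, 80)):
    """Render plain text on a white grayscale canvas."""
    img = Image.new("L", size, color=255)
    draw = ImageDraw.Draw(img)
    draw.text((10, 25), text, fill=0, font=ImageFont.load_default())
    return img

def sine_warp(img, amplitude=6.0, wavelength=60.0):
    """Shift each pixel column vertically along a sine curve."""
    w, h = img.size
    src, out = img.load(), Image.new("L", (w, h), color=255)
    dst = out.load()
    for x in range(w):
        dy = int(amplitude * math.sin(2 * math.pi * x / wavelength))
        for y in range(h):
            sy = y - dy
            if 0 <= sy < h:
                dst[x, y] = src[x, sy]
    return out

captcha = sine_warp(render_text("example text"))
captcha.save("captcha.png")
```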
8.
The fundamental phenomenon that has been used to enhance the convergence speed of learning automata (LA) is that of incorporating the running maximum likelihood (ML) estimates of the action reward probabilities into the probability updating rules for selecting the actions. The frontiers of this field have recently been expanded by replacing the ML estimates with their Bayesian counterparts, which incorporate the properties of the conjugate priors. These constitute the Bayesian pursuit algorithm (BPA) and the discretized Bayesian pursuit algorithm. Although these algorithms have been designed and efficiently implemented, and are, arguably, the fastest and most accurate LA reported in the literature, the proofs of their ε-optimal convergence have remained open. This is precisely the intent of this paper. We present a single unifying analysis that proves the convergence of both the continuous and discretized schemes. We emphasize that unlike the ML-based pursuit schemes, the Bayesian schemes must consider not only the estimates themselves but also the distributional forms of their conjugate posteriors and their higher-order moments, all of which render the proofs particularly challenging. As far as we know, apart from the results themselves, the methodologies of this proof are unreported in the literature; they are both pioneering and novel.
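A minimal sketch of a Bayesian pursuit-style automaton follows. Each action's reward probability gets a Beta conjugate posterior, and the selection probabilities are "pursued" toward the apparently best action. For simplicity this sketch ranks actions by the posterior mean, whereas the published BPA uses a different posterior statistic; the reward probabilities and learning rate are made up.

```python
import random

ACTIONS = 3
TRUE_REWARD = [0.2, 0.5, 0.8]   # unknown to the automaton
LAMBDA = 0.01                    # pursuit learning rate

p = [1.0 / ACTIONS] * ACTIONS    # action selection probabilities
a = [1.0] * ACTIONS              # Beta posterior: 1 + observed rewards
b = [1.0] * ACTIONS              # Beta posterior: 1 + observed penalties

for _ in range(20_000):
    i = random.choices(range(ACTIONS), weights=p)[0]
    reward = random.random() < TRUE_REWARD[i]
    a[i] += reward
    b[i] += 1 - reward
    # Posterior-mean estimates of the reward probabilities:
    est = [a[k] / (a[k] + b[k]) for k in range(ACTIONS)]
    best = est.index(max(est))
    # Pursuit update: shift probability mass toward the apparent best.
    p = [(1 - LAMBDA) * p[k] + (LAMBDA if k == best else 0.0)
         for k in range(ACTIONS)]

print("estimates:", [round(e, 2) for e in est])
print("selection probabilities:", [round(x, 2) for x in p])
```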
9.
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained; the disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when a user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fractions of each web site that are successfully validated by an HTML validator. Using the general LA paradigm to tackle both real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found, even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
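To illustrate the second application, here is a minimal sketch of a controlled random walk on a discretized sampling allocation for estimating several binomial proportions. The walk moves one unit of budget at a time toward the site whose estimate would benefit most, drifting toward a Neyman-style allocation; the site proportions, budget and drift signal are made-up simplifications of the paper's LA team.

```python
import math
import random

TRUE_P = [0.9, 0.6, 0.3]         # fraction of valid pages per site (unknown)
SITES = len(TRUE_P)
BUDGET = 60                      # samples available per round
alloc = [BUDGET // SITES] * SITES
succ, trials = [0] * SITES, [0] * SITES

for _ in range(500):
    # Spend this round's budget according to the current allocation.
    for i in range(SITES):
        for _ in range(alloc[i]):
            succ[i] += random.random() < TRUE_P[i]
            trials[i] += 1
    # Per-sample standard deviation sqrt(p(1-p)): Neyman allocation
    # assigns samples proportionally to it, i.e. equalizes sd/alloc.
    sd = [math.sqrt(max(s / t * (1 - s / t), 1e-9))
          for s, t in zip(succ, trials)]
    need = [sd[i] / max(alloc[i], 1) for i in range(SITES)]
    hi, lo = need.index(max(need)), need.index(min(need))
    # Controlled random walk: move one unit of budget per round.
    if hi != lo and alloc[lo] > 1:
        alloc[hi] += 1
        alloc[lo] -= 1

print("allocation:", alloc)
print("estimates:", [round(s / t, 2) for s, t in zip(succ, trials)])
```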
10.
This paper deals with the problem of estimating a transmitted string X* by processing the corresponding string Y, which is a noisy version of X*. We assume that Y contains substitution, insertion, and deletion errors, and that X* is an element of a finite (but possibly large) dictionary, H. The best estimate X⁺ of X* is defined as that element of H which minimizes the generalized Levenshtein distance D(X, Y) between X and Y, such that the total number of errors is not more than K, for all X ∈ H. The trie is a data structure that offers search costs independent of the document size. Tries also combine prefixes together, so by using tries in approximate string matching we can reuse the information obtained in evaluating any one D(Xᵢ, Y) to compute any other D(Xⱼ, Y), where Xᵢ and Xⱼ share a common prefix. In the artificial intelligence (AI) domain, branch and bound (BB) schemes are used when we want to prune paths that have costs above a certain threshold; such techniques have been applied, for example, to prune game trees. In this paper, we present a new BB pruning strategy that can be applied to dictionary-based approximate string matching when the dictionary is stored as a trie. The new strategy attempts to look ahead at each node, c, before moving further, by merely evaluating a certain local criterion at c. The search algorithm following this pruning strategy will not traverse inside subtrie(c) unless there is a "hope" of determining a suitable string in it. In other words, as opposed to the reported trie-based methods (Kashyap and Oommen in Inf Sci 23(2):123–142, 1981; Shang and Merrett in IEEE Trans Knowledge Data Eng 8(4):540–547, 1996), the pruning is done a priori, before even embarking on the edit distance computations. The new strategy depends highly on the variance of the lengths of the strings in H. It combines the advantages of partitioning the dictionary according to string lengths with the advantages gleaned by representing H using the trie data structure. The results demonstrate a marked improvement (up to 30% when costs are of a 0/1 form, and up to 47% when costs are general) with respect to the number of operations needed on three benchmark dictionaries.
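A minimal sketch of the baseline that this paper improves on: Levenshtein dynamic programming over a trie, sharing DP rows across dictionary words with a common prefix and cutting off branches whose best achievable cost already exceeds the threshold. The paper's BB strategy prunes more aggressively, with a look-ahead test at each node before any edit-distance work; the dictionary below is made up.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None          # set when a dictionary word ends here

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def search(root, query, max_cost):
    """Return (word, distance) pairs with distance <= max_cost."""
    results = []
    first_row = list(range(len(query) + 1))

    def recurse(node, ch, prev_row):
        # Extend the shared DP row by one trie character.
        row = [prev_row[0] + 1]
        for c in range(1, len(query) + 1):
            row.append(min(row[c - 1] + 1,               # insertion
                           prev_row[c] + 1,              # deletion
                           prev_row[c - 1] + (query[c - 1] != ch)))
        if node.word is not None and row[-1] <= max_cost:
            results.append((node.word, row[-1]))
        # Cutoff: if every cell exceeds max_cost, no extension can recover.
        if min(row) <= max_cost:
            for next_ch, child in node.children.items():
                recurse(child, next_ch, row)

    for ch, child in root.children.items():
        recurse(child, ch, first_row)
    return results

trie = build_trie(["water", "walter", "wager", "otter"])
print(search(trie, "woter", max_cost=1))   # [('water', 1)]
```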