991.
John McCarthy, Peter Wright, Jayne Wallace, Andy Dearden 《Personal and Ubiquitous Computing》2006,10(6):369-378
Improving user experience is becoming something of a rallying call in human–computer interaction, but experience is not a unitary thing. There are varieties of experience, good and bad, and we need to characterise these varieties if we are to improve user experience. In this paper we argue that enchantment is a useful concept for fostering closer relationships between people and technology. But enchantment is a complex concept in need of some clarification, so we explore how it has been used in discussions of technology and examine experiences of film and cell phones to see how enchantment with technology is possible. Based on these cases, we identify the sensibilities that help designers design for enchantment, including the specific sensuousness of a thing; senses of play, paradox, and openness; and the potential for transformation. We use these to analyse digital jewellery and to suggest how it can be made more enchanting. We conclude by relating enchantment to varieties of experience. An earlier version of this paper was presented at the CHI 2004 Fringe.
992.
This paper deals with the problem of estimating a transmitted string X* by processing the corresponding string Y, a noisy version of X*. We assume that Y contains substitution, insertion, and deletion errors, and that X* is an element of a finite (but possibly large) dictionary, H. The best estimate X+ of X* is defined as the element of H that minimizes the generalized Levenshtein distance D(X, Y) between X and Y, such that the total number of errors is not more than K, for all X ∈ H. The trie is a data structure that offers search costs independent of the document size. Tries also merge common prefixes, so in approximate string matching the work done in evaluating one D(X_i, Y) can be reused to compute any D(X_j, Y) where X_i and X_j share a prefix. In artificial intelligence (AI), branch-and-bound (BB) schemes are used to prune paths whose costs exceed a threshold; such techniques have been applied, for example, to game trees. In this paper, we present a new BB pruning strategy for dictionary-based approximate string matching when the dictionary is stored as a trie. The strategy looks ahead at each node c before moving further, by merely evaluating a local criterion at c; the search will not traverse subtrie(c) unless there is a “hope” of finding a suitable string in it. In other words, as opposed to the reported trie-based methods (Kashyap and Oommen in Inf Sci 23(2):123–142, 1981; Shang and Merrett in IEEE Trans Knowl Data Eng 8(4):540–547, 1996), the pruning is done a priori, before even embarking on the edit-distance computations. The strategy depends strongly on the variance of the string lengths in H: it combines the advantages of partitioning the dictionary by string length with those gained by representing H as a trie. The results demonstrate a marked improvement (up to 30% when costs are of a 0/1 form, and up to 47% when costs are general) in the number of operations needed on three benchmark dictionaries.
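The prefix-sharing and pruning ideas described above can be sketched in a few lines. The following Python is only an illustration, not the paper's method: it uses the classic prune-when-every-DP-entry-exceeds-K test rather than the authors' a priori look-ahead criterion, and assumes unit (0/1) edit costs. All names (`TrieNode`, `build_trie`, `search`) are invented for the sketch.

```python
# Sketch: dictionary-based approximate matching over a trie.
# Prefix sharing means the DP row for a shared prefix is computed once;
# a subtrie is pruned when every entry of the current row exceeds K.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None          # set on nodes that end a dictionary string

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def search(root, y, k):
    """Return all dictionary strings within Levenshtein distance k of y."""
    results = []
    first_row = list(range(len(y) + 1))   # distance from "" to prefixes of y

    def dfs(node, prev_row):
        if node.word is not None and prev_row[-1] <= k:
            results.append((node.word, prev_row[-1]))
        if min(prev_row) > k:             # prune: no extension can recover
            return
        for ch, child in node.children.items():
            row = [prev_row[0] + 1]
            for j in range(1, len(y) + 1):
                cost = 0 if y[j - 1] == ch else 1
                row.append(min(row[j - 1] + 1,           # insertion
                               prev_row[j] + 1,          # deletion
                               prev_row[j - 1] + cost))  # substitution
            dfs(child, row)

    dfs(root, first_row)
    return results
```

Because "cat" and "cart" share the prefix "ca", the first two DP rows are computed once for both; "dog" is abandoned as soon as its partial row exceeds the threshold.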
993.
We consider the issue of exploiting the structural form of Esterel programs to partition the algorithmic RSS (reachable state space) fix-point construction used in model-checking techniques. The basic idea sounds utterly simple, as seen in the case of sequential composition: in P; Q, first compute entirely the states reached in P, and only then carry on to Q, each time using only the relevant part of the transition relation. A brute-force symbolic breadth-first search would instead mix the exploration of P and Q whenever P had behaviors of various lengths, resulting in irregular BDD representations of intermediate state spaces, a major cause of complexity in symbolic model-checking. Difficulties appear in our decomposition approach when scheduling the different transition parts in the presence of parallelism and local signal exchanges. Program blocks (or “macro-states”) put in parallel can be synchronized in various ways, due to dynamic behaviors, and considering all possibilities may lead to excessive division complexity. The goal is to find a satisfactory trade-off between compositional and global approaches. Concretely, we use some features of the TiGeR BDD library, together with heuristic orderings between internal signals, to have the transition relation progress through the program behaviors so as to obtain the same effect as a global RSS computation, but with much more localized transition applications. We provide concrete benchmarks showing the usefulness of the approach.
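The sequential-composition case can be made concrete with sets standing in for BDDs. This is a hedged explicit-state sketch only: the paper works symbolically over Esterel programs, whereas the dict-based transition encoding, the exit/entry-state convention, and the names `reach` and `reach_seq` are all assumptions of this illustration.

```python
# Explicit-state sketch of the decomposition idea for P; Q.
# First saturate reachability inside P, then carry on into Q
# only if some reachable P-state terminates, applying only the
# relevant transition-relation part in each phase.

def reach(init, trans):
    """Least fix-point of frontier exploration: all states reachable from init."""
    seen, frontier = set(init), set(init)
    while frontier:
        frontier = {t for s in frontier for t in trans.get(s, ())} - seen
        seen |= frontier
    return seen

def reach_seq(init, trans_p, exits, entry_q, trans_q):
    """RSS of P; Q: explore P to completion, then start Q from its entry
    state whenever some reachable P-state is a termination (exit) state."""
    rss_p = reach(init, trans_p)
    rss_q = reach({entry_q}, trans_q) if rss_p & exits else set()
    return rss_p | rss_q
```

The point of the decomposition is visible even here: each fix-point runs against only one transition relation at a time, mirroring the localized transition applications the paper aims for.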
994.
Z. Melek, D. Mayerich, C. Yuksel, J. Keyser 《IEEE transactions on visualization and computer graphics》2006,12(5):1165-1172
Thread-like structures are becoming more common in modern volumetric data sets as our ability to image vascular and neural tissue at higher resolutions improves. The thread-like structures of neurons and micro-vessels pose a unique problem in visualization since they tend to be densely packed in small volumes of tissue. This makes it difficult for an observer to interpret useful patterns from the data or trace individual fibers. In this paper we describe several methods for dealing with large amounts of thread-like data, such as data sets collected using Knife-Edge Scanning Microscopy (KESM) and Serial Block-Face Scanning Electron Microscopy (SBF-SEM). These methods allow us to collect volumetric data from embedded samples of whole-brain tissue. The neuronal and microvascular data that we acquire consists of thin, branching structures extending over very large regions. Traditional visualization schemes are not sufficient to make sense of the large, dense, complex structures encountered. We address three methods that allow a user to explore a fiber network effectively. First, we describe interactive techniques for rendering large sets of neurons using self-orienting surfaces implemented on the GPU. Second, we present techniques for rendering fiber networks in a way that provides useful information about flow and orientation. Third, a global illumination framework is used to create high-quality visualizations that emphasize the underlying fiber structure. Implementation details, performance, and advantages and disadvantages of each approach are discussed.
995.
Luis Rueda, B. John Oommen 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2006,36(5):1196-1200
This correspondence shows that learning automata techniques, which have been useful in developing weak estimators, can be applied to data compression applications in which the data distributions are nonstationary. The adaptive coding scheme uses stochastic learning-based weak estimation to update the probabilities of the source symbols adaptively, without resorting to maximum likelihood, Bayesian, or sliding-window methods. The authors have incorporated the estimator into an adaptive Fano coding scheme and into an adaptive entropy-based scheme that "resembles" the well-known arithmetic coding. Empirical results for both adaptive methods were obtained on real-life files that possess a fair degree of nonstationarity; they show that the proposed schemes compress nearly 10% better than the corresponding adaptive methods that use maximum-likelihood estimates.
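The binomial form of the stochastic-learning weak-estimator update that such schemes build on can be sketched as follows. The rate λ = 0.95, the starting estimate, and the function names are illustrative choices for this sketch, not values from the paper (whose coder uses the multinomial generalization over a full symbol alphabet).

```python
# Sketch: binomial stochastic-learning weak estimator (SLWE).
# Each observed symbol nudges the probability estimate multiplicatively
# with rate lambda, so the estimate tracks a drifting (nonstationary)
# source instead of averaging over all history the way a
# maximum-likelihood frequency counter would.

def slwe_update(p1, symbol, lam=0.95):
    """Update the estimate of P(symbol = 1) after one observation (0 or 1)."""
    if symbol == 1:
        return lam * p1 + (1.0 - lam)   # move toward 1
    return lam * p1                     # move toward 0

def track(stream, lam=0.95, p1=0.5):
    """Run the estimator over a symbol stream; return the final estimate."""
    for s in stream:
        p1 = slwe_update(p1, s, lam)
    return p1
```

On a stream that switches from all-ones to all-zeros, the estimate follows the switch within a few dozen symbols, which is exactly the property a frequency counter lacks.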
996.
997.
Information retrieval from the World Wide Web through search engines is known to capture users' information needs poorly. The approach taken in this paper is to add intelligence to Web information retrieval by modeling users, improving the interaction between the user and the retrieval system; in other words, improving the user's performance in retrieving information from the information source. To effect such an improvement, a retrieval system must somehow make inferences about the information the user might want. The system can then aid the user, for instance by offering suggestions or by adapting the query based on predictions furnished by the model. Combining user modeling with fuzzy logic, we developed a prototype system, the Fuzzy Modeling Query Assistant (FMQA), which modifies a user's query based on a fuzzy user model. The FMQA was tested in a user study which clearly indicated that, for the limited domain chosen, the modified queries are better than those left unmodified.
Received 10 November 1998 / Revised 14 June 2000 / Accepted in revised form 25 September 2000
998.
SAFE: An Efficient Feature Extraction Technique
Ujjwal Maulik, Sanghamitra Bandyopadhyay, John C. Trinder 《Knowledge and Information Systems》2001,3(3):374-387
This paper proposes an efficient window-based semi-automatic feature extraction technique that uses simulated annealing to minimize the energy of an active contour within a specified image region. The energy is computed from a chamfer image, in which pixel values are a function of distance to image edges. A user places a number of control points close to the feature of interest, and a B-spline fitted to these points provides an initial approximation of the contour. A window containing both the initial contour and the feature of interest is considered, and the contour with minimum energy inside the window provides the final delineation. Comparison with the traditional snake, a popular feature extraction technique based on energy minimization, demonstrates the superiority of the SAFE technique.
Received 18 August 1999 / Revised 25 October 2000 / Accepted in revised form 8 December 2000
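The energy-minimization loop at the core of such a technique is ordinary simulated annealing. Below is a generic sketch with a toy one-dimensional energy standing in for the paper's chamfer-image contour energy; the cooling schedule, neighborhood, and parameter values are placeholders, not the authors' settings.

```python
import math
import random

# Generic simulated-annealing loop: always accept improvements, accept
# uphill moves with probability exp(-delta/T), and cool T geometrically.

def anneal(energy, start, neighbors, t0=10.0, cooling=0.99, steps=2000, seed=0):
    rng = random.Random(seed)
    state, e = start, energy(start)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = rng.choice(neighbors(state))
        ce = energy(cand)
        # Metropolis acceptance: uphill moves become rare as T cools.
        if ce <= e or rng.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e
```

For contour optimization, `state` would be the vector of control-point positions inside the window and `energy` the chamfer-image sum along the B-spline, but the acceptance logic is unchanged.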
999.
We have discovered a class of fractal functions that are differentiable. Fractal interpolation functions have been used for over a decade to generate rough functions passing through a set of given points. The integral of a fractal interpolation function remains a fractal interpolation function, and this new function is differentiable. Tensor products of pairs of these fractal functions form fractal surfaces with a well-defined tangent plane, and hence a well-defined surface normal. We use this normal to shade fractal surfaces, and demonstrate its use with renderings of fractal mirrors.
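A fractal interpolation function of the kind discussed can be built from affine maps and rendered with the chaos game. The sketch below follows the standard FIF construction; the function names, parameter choices, and the sanity check d_i = 0 (which collapses the FIF to piecewise linear interpolation) are this sketch's assumptions, not the paper's.

```python
import random

# Sketch: fractal interpolation function (FIF) from affine maps
#   w_i(x, y) = (L_i(x), c_i*x + d_i*y + f_i),
# where L_i maps the whole x-interval onto the i-th subinterval and the
# vertical scaling factors d_i (|d_i| < 1) control the roughness.

def fif_maps(pts, d):
    """Build one affine map per interval from interpolation points pts."""
    (x0, y0), (xN, yN) = pts[0], pts[-1]
    maps = []
    for (xa, ya), (xb, yb), di in zip(pts, pts[1:], d):
        a = (xb - xa) / (xN - x0)                 # L_i(x) = a*x + e
        e = xa - a * x0
        c = (yb - ya - di * (yN - y0)) / (xN - x0)
        f = ya - c * x0 - di * y0
        maps.append((a, e, c, di, f))
    return maps

def chaos_game(maps, n=5000, start=(0.0, 0.0), seed=0):
    """Sample the FIF graph by iterating randomly chosen maps."""
    rng = random.Random(seed)
    x, y = start
    out = []
    for i in range(n):
        a, e, c, d, f = rng.choice(maps)
        x, y = a * x + e, c * x + d * y + f
        if i > 10:        # skip the transient before the orbit settles
            out.append((x, y))
    return out
```

With nonzero d_i the sampled graph is rough but still passes through the data points; with d_i = 0 every sample lies exactly on the piecewise linear interpolant, which makes the construction easy to verify.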
1000.
Anna-Marja Hoffren, Markku Saloheimo, Pamela Thomas, John P. Overington, Mark S. Johnson, Jonathan K.C. Knowles, Tom L. Blundell 《Protein engineering, design & selection : PEDS》1993,6(2):177-182
A model of the lignin peroxidase LIII of Phlebia radiata was constructed on the basis of the structure of cytochrome c peroxidase (CCP). Because of the low percentage of amino acid identity between CCP and the lignin peroxidase LIII of Phlebia radiata, alignment of the sequences was based on the generation of a template from knowledge of the 3-D structure of CCP and consensus sequences of lignin peroxidases. This approach gave an alignment in which all the insertions in the lignin peroxidase were placed at loop regions of CCP, with a 21.1% identity between the two proteins. The model was constructed using this alignment and the computer program COMPOSER, which assembles the model as a series of rigid fragments derived from CCP and other proteins. Manual intervention was required for some of the longer loop regions. The α-helices forming the structural framework, and especially the haem environment of CCP, are conserved in the LIII model, and the core is close-packed without holes. A possible site of substrate oxidation at the haem edge of LIII is discussed.