Paid full text   3482 articles
Free   201 articles
Free (domestic)   6 articles
Electrical engineering   51 articles
General   3 articles
Chemical industry   968 articles
Metalworking   76 articles
Machinery and instrumentation   86 articles
Building science   138 articles
Mining engineering   6 articles
Energy and power   154 articles
Light industry   265 articles
Water conservancy engineering   28 articles
Petroleum and natural gas   3 articles
Radio and electronics   353 articles
General industrial technology   697 articles
Metallurgical industry   177 articles
Atomic energy technology   32 articles
Automation technology   652 articles
2023   25 articles
2022   69 articles
2021   104 articles
2020   55 articles
2019   76 articles
2018   86 articles
2017   91 articles
2016   120 articles
2015   104 articles
2014   144 articles
2013   241 articles
2012   220 articles
2011   281 articles
2010   182 articles
2009   217 articles
2008   226 articles
2007   168 articles
2006   179 articles
2005   124 articles
2004   107 articles
2003   103 articles
2002   120 articles
2001   55 articles
2000   51 articles
1999   49 articles
1998   66 articles
1997   70 articles
1996   40 articles
1995   32 articles
1994   36 articles
1993   26 articles
1992   17 articles
1991   15 articles
1990   20 articles
1989   17 articles
1988   9 articles
1987   8 articles
1986   13 articles
1985   15 articles
1984   17 articles
1983   9 articles
1982   10 articles
1981   14 articles
1980   7 articles
1979   7 articles
1978   8 articles
1977   5 articles
1976   9 articles
1975   5 articles
1972   4 articles
Sort order: 3689 results found (search time: 359 ms)
91.
This work develops an algorithm for the visual quality recognition of nonwoven materials, in which image analysis and a neural network are used in the feature-extraction and pattern-recognition stages, respectively. In the feature-extraction stage, each image is decomposed into four levels using the 9-7 biorthogonal wavelet basis. The wavelet coefficients in each subband are then modeled independently with the generalized Gaussian density (GGD) model, whose scale and shape parameters, obtained with a maximum likelihood (ML) estimator, serve as texture features. In the recognition stage, a robust Bayesian neural network classifies the 625 nonwoven samples into five visual quality grades, i.e., 125 samples per grade. Finally, we carry out outlier detection on the training set using the outlier probability and select the most suitable model structure and parameters from 40 Bayesian neural networks using Occam's razor. When 18 relevant textural features are extracted for each sample based on the GGD model, the average recognition accuracy on the test set ranges from 88% to 98.4%, depending on the number of hidden neurons in the Bayesian neural network.
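For readers who want a concrete starting point, the feature-extraction stage can be approximated with off-the-shelf libraries. The sketch below is not the authors' code: it assumes PyWavelets' 'bior4.4' filter bank as a stand-in for the 9/7 biorthogonal wavelet and uses scipy's generalized normal distribution (gennorm) for ML estimation of the GGD shape and scale parameters.

```python
# Hypothetical sketch: GGD texture features from a 4-level wavelet
# decomposition, in the spirit of the feature-extraction stage above.
# 'bior4.4' is used as a stand-in for the 9/7 biorthogonal wavelet;
# gennorm.fit gives ML estimates of the shape (beta) and scale of each
# detail subband.
import numpy as np
import pywt
from scipy.stats import gennorm

def ggd_features(image, wavelet="bior4.4", levels=4):
    """Return (shape, scale) pairs for every detail subband of `image`."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    features = []
    for level_details in coeffs[1:]:      # skip the approximation band
        for band in level_details:        # horizontal, vertical, diagonal
            data = band.ravel()
            # gennorm.fit returns (beta, loc, scale); loc is fixed at 0
            # because detail coefficients are approximately zero-mean.
            beta, _, scale = gennorm.fit(data, floc=0)
            features.extend([beta, scale])
    return np.asarray(features)

sample = np.random.rand(256, 256)         # placeholder for a nonwoven image
print(ggd_features(sample).shape)
```

With four levels and three orientations this setup yields 24 (shape, scale) values per image rather than the 18 relevant features mentioned above, so the paper evidently uses a different subband selection; the sketch only illustrates the GGD modeling step.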
92.
A methodology is proposed to infer the altitude of aerosol plumes over the ocean from reflectance ratio measurements in the O2 absorption A-band (759 to 770 nm). The reflectance ratio is defined as the ratio of the reflectance in a first spectral band, strongly attenuated by O2 absorption, to the reflectance in a second spectral band, minimally attenuated. For a given surface reflectance, simple relations are established between the reflectance ratio and the altitude of an aerosol layer, as a function of atmospheric conditions and the geometry of observation. The expected accuracy for various aerosol loadings and models is first quantified using an accurate, high-spectral-resolution radiative transfer model that fully accounts for interactions between scattering and absorption. The method is developed for POLDER and MERIS, satellite sensors with adequate spectral characteristics. The simulations show that the method is accurate over dark surfaces only when the aerosol optical thickness at 765 nm is relatively large (> 0.3). In this case, the expected accuracy is on the order of ± 0.5 km for POLDER and ± 0.2 km for MERIS. More accurate estimates are obtained with MERIS, since in this case the spectral reflectance ratio is more sensitive to aerosol altitude; however, a precise spectral calibration is needed for MERIS. The methodology is applied to MERIS and POLDER imagery acquired over marine surfaces. The estimated aerosol altitude is compared with in situ lidar profiles of the backscattering coefficient measured during the AOPEX-2004 experiment for MERIS, and with profiles obtained by the space-borne lidar CALIOP for POLDER. The retrieved altitudes agree with the lidar measurements in a manner consistent with theory. These comparisons demonstrate the potential of the differential absorption methodology for obtaining information on aerosol altitude over dark surfaces.
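As a reminder of the quantity being exploited (our notation; the paper's symbols may differ), the measured ratio is

\[
R \;=\; \frac{\rho_{\mathrm{TOA}}(\lambda_{\mathrm{abs}})}{\rho_{\mathrm{TOA}}(\lambda_{\mathrm{ref}})},
\qquad 759~\mathrm{nm} \le \lambda_{\mathrm{abs}},\, \lambda_{\mathrm{ref}} \le 770~\mathrm{nm},
\]

where \(\lambda_{\mathrm{abs}}\) lies in a strongly absorbing part of the O2 A-band and \(\lambda_{\mathrm{ref}}\) in a weakly absorbing one. In a simple single-scattering picture, photons reflected by an elevated aerosol layer traverse less O2 on their way down and back up, so for a fixed dark surface and viewing geometry R increases with the altitude of the layer, which is what allows the altitude to be read off the ratio.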
93.
Virtual execution environments, such as the Java virtual machine, promote platform-independent software development. However, when it comes to analyzing algorithm complexity and performance bottlenecks, available tools focus on platform-specific metrics, such as the CPU time consumed on a particular system. Other drawbacks of many prevailing profilers are high overhead, significant measurement perturbation, and reduced portability, since they are often implemented in platform-dependent native code. This article presents a novel profiling approach, based entirely on program transformation techniques, that builds a profiling data structure providing calling-context-sensitive program execution statistics. We explore the use of platform-independent profiling metrics in order to make the instrumentation entirely portable and to generate reproducible profiles. We implemented these ideas within a Java-based profiling tool called JP. A significant novelty is that this tool achieves complete bytecode coverage by statically instrumenting the core runtime libraries and dynamically instrumenting the rest of the code. JP provides a small and flexible API for writing customized profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements show that, despite the presence of dynamic instrumentation, JP causes significantly less overhead than a prevailing tool for profiling Java code. Copyright © 2008 John Wiley & Sons, Ltd.
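As a language-agnostic illustration of what "calling-context-sensitive statistics with a platform-independent metric" means (this is only an analogy, not JP's API), the following Python sketch builds a calling-context tree and counts how often each context is entered; unlike CPU time, such counts are reproducible across platforms.

```python
# Minimal calling-context-tree profiler (a conceptual analogy to the idea
# above, not JP's actual API). Each node is keyed by the current chain of
# callers and stores a platform-independent metric: how many times that
# calling context was entered.
import sys
from collections import defaultdict

cct = defaultdict(int)   # calling context (tuple of function names) -> count
stack = []

def tracer(frame, event, arg):
    if event == "call":
        stack.append(frame.f_code.co_name)
        cct[tuple(stack)] += 1
    elif event == "return" and stack:
        stack.pop()

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setprofile(tracer)
fib(6)
sys.setprofile(None)

for context, count in sorted(cct.items(), key=lambda kv: -kv[1])[:5]:
    print(" -> ".join(context), count)
```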
94.
Randomized algorithms are widely used to find efficient approximate solutions to complex problems (for instance, primality testing) and to obtain good average-case behavior. Proving properties of such algorithms requires subtle reasoning about both the algorithmic and the probabilistic aspects of programs, so providing tools that mechanize this reasoning is an important issue. This paper presents a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It is based on the monadic interpretation of randomized programs as probabilistic distributions (Giry, Ramsey and Pfeffer). It requires neither the definition of an operational semantics for the language nor the development of a complex formalization of measure theory; instead, it uses functional and algebraic properties of the unit interval. Using this model, we show the validity of general rules for estimating the probability that a randomized algorithm satisfies specified properties. The approach addresses only discrete distributions and gives rules for analyzing general recursive functions. We apply this theory to the formal proof of a program implementing a Bernoulli distribution from a coin flip and to the (partial) termination of several programs. All the theories and results presented in this paper have been fully formalized and proved in the Coq proof assistant.
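As an illustration of the kind of program being verified, here is a classical way to draw from a Bernoulli(p) distribution using only fair coin flips; the exact program proved in the paper may differ, but the sketch conveys why probabilistic reasoning about recursion, and about almost-sure termination, is needed.

```python
# A classical implementation of a Bernoulli(p) draw from fair coin flips,
# in the spirit of the example verified in the paper (the exact program
# proved there may differ). Each recursive call consumes one bit of the
# binary expansion of p; the expected number of flips is 2 and the
# procedure terminates with probability 1.
import random

def flip():
    return random.random() < 0.5   # a fair coin

def bernoulli(p):
    """Return True with probability p, using only fair coin flips."""
    if flip():
        return False if p < 0.5 else bernoulli(2 * p - 1)
    else:
        return bernoulli(2 * p) if p < 0.5 else True

# Quick sanity check: the empirical frequency should be close to p.
p = 0.3
samples = [bernoulli(p) for _ in range(100_000)]
print(sum(samples) / len(samples))   # approximately 0.3
```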
95.
Restoration of photographs damaged by camera shake is a challenging task that has attracted increasing attention in recent years. Despite the important progress of blind deconvolution techniques, the ill-posed nature of the problem means that the finest details of the blur kernel cannot be recovered entirely. Moreover, the additional constraints and prior assumptions make these approaches relatively limited.
In this paper we introduce a novel technique that removes undesired blur artifacts from photographs taken by hand-held digital cameras. Our approach is based on the observation that, in general, several consecutive photographs taken by a user share image regions that depict the same scene content. We therefore take advantage of additional sharp photographs of the same scene. Based on several invariant local feature points filtered from the given blurred/non-blurred images, our approach matches the keypoints and estimates the blur kernel under additional statistical constraints.
We also present a simple deconvolution technique that preserves edges while minimizing ringing artifacts in the restored latent image. The experimental results show that our technique is able to infer the blur kernel accurately while significantly reducing the artifacts in the degraded images.
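To make the final step concrete: once a kernel has been estimated, recovering the latent image is a non-blind deconvolution problem. The sketch below is not the authors' edge-preserving scheme; it uses the standard Richardson-Lucy algorithm from scikit-image on a synthetically blurred image, with a hypothetical motion-blur kernel standing in for an estimated camera-shake kernel.

```python
# Non-blind deconvolution with a known (here, synthetic) blur kernel.
# The paper proposes its own edge-preserving, ringing-suppressing scheme;
# this sketch uses standard Richardson-Lucy purely as an illustration.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera().astype(float) / 255.0

# Hypothetical 9x9 horizontal motion-blur kernel (stand-in for an
# estimated camera-shake kernel).
kernel = np.zeros((9, 9))
kernel[4, :] = 1.0
kernel /= kernel.sum()

blurred = convolve2d(image, kernel, mode="same", boundary="symm")
restored = restoration.richardson_lucy(blurred, kernel)

print(blurred.shape, restored.shape)
```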
96.
This paper presents and discusses a blocked parallel implementation of two- and three-dimensional versions of the Lattice Boltzmann Method. This method represents and simulates fluid flows following a mesoscopic approach. Most traditional parallel implementations use simple data-distribution strategies to parallelize the operations on the regular fluid data set; however, it is well known that block partitioning is usually better. Such a parallel implementation is discussed and its communication cost is established. Simulations of fluid flow across a cavity are also used as a real-world case study to evaluate our implementation. For some data distributions, our blocked implementation achieves performance up to 31% better than non-blocked versions. This work thus shows that blocked parallel implementations can be used efficiently to reduce the parallel execution time of the method.
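The advantage of block over strip partitioning is the usual surface-to-volume argument: halo traffic scales with the perimeter of each subdomain rather than its area. The following back-of-the-envelope comparison uses our own simplified model (it ignores message latency, corner exchanges and the exact lattice-Boltzmann stencil), not the cost model established in the paper.

```python
# Rough halo-exchange volume per process for a 2D N x N lattice split over
# P processes. Simplified model: interior subdomains only, one lattice site
# exchanged per boundary site per time step.
import math

def strip_halo(n):
    """1D (strip) decomposition: each interior strip exchanges two full rows."""
    return 2 * n

def block_halo(n, p):
    """2D (block) decomposition: each interior block exchanges its four edges."""
    return 4 * (n / math.sqrt(p))

n, procs = 1024, 64
print(f"strip decomposition: {strip_halo(n):.0f} sites per process")
print(f"block decomposition: {block_halo(n, procs):.0f} sites per process")
# With 64 processes the block layout exchanges 4 * 1024 / 8 = 512 sites
# instead of 2048, i.e. a 4x reduction in halo volume per process.
```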
97.
Several authors have hailed intuition as one of the defining features of expertise. In particular, while disagreeing on almost anything that touches on human cognition and artificial intelligence, Hubert Dreyfus and Herbert Simon agreed on this point. However, the highly influential theories of intuition they proposed differed in major ways, especially with respect to the role given to search and as to whether intuition is holistic or analytic. Both theories suffer from empirical weaknesses. In this paper, we show how, with some additions, a recent theory of expert memory (the template theory) offers a coherent and wide-ranging explanation of intuition in expert behaviour. It is shown that the theory accounts for the key features of intuition: it explains the rapid onset of intuition and its perceptual nature, provides mechanisms for learning, incorporates processes showing how perception is linked to action and emotion, and how experts capture the entirety of a situation. In doing so, the new theory addresses the issues problematic for Dreyfus’s and Simon’s theories. Implications for research and practice are discussed.
98.
Bytecode instrumentation is a widely used technique to implement aspect weaving and dynamic analyses in virtual machines such as the Java virtual machine. Aspect weavers and other instrumentations are usually developed independently, and combining them often requires significant engineering effort, if it is possible at all. In this article, we present polymorphic bytecode instrumentation (PBI), a simple but effective technique that allows dynamic dispatch amongst several, possibly independent instrumentations. PBI enables complete bytecode coverage, that is, any method with a bytecode representation can be instrumented. We illustrate further benefits of PBI with three case studies. First, we describe how PBI can be used to implement a comprehensive profiler of inter-procedural and intra-procedural control flow. Second, we provide an implementation of execution levels for AspectJ, which avoids infinite regression and unwanted interference between aspects. Third, we present a framework for adaptive dynamic analysis, where the analysis to be performed can be changed at runtime by the user. We assess the overhead introduced by PBI and provide thorough performance evaluations of PBI in all three case studies. We show that pure Java profilers like JP2 can, thanks to PBI, produce accurate execution profiles by covering all code, including the core Java libraries. We then demonstrate that PBI-based execution levels are much faster than control flow pointcuts at avoiding interference between aspects and that their efficient integration in a practical aspect language is possible. Finally, we report that PBI enables adaptive dynamic analysis tools that are more reactive to user inputs than existing tools that rely on dynamic aspect-oriented programming with runtime weaving. These experiments position PBI as a widely applicable and practical approach for combining bytecode instrumentations. © 2015 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
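The dispatch idea at the heart of PBI can be mimicked, very loosely, outside the JVM. The Python sketch below is only a conceptual analogy (PBI itself operates on JVM bytecode and is not exposed through such an API): several instrumented variants of a function coexist, and a thread-local level selects which one runs, which is also how execution levels keep the analysis code from being analysed in turn.

```python
# Conceptual analogy to the dispatch idea above (not PBI's API). Several
# instrumented variants of a function coexist; a thread-local "level"
# decides which variant runs. Level 0 means uninstrumented, so the
# analysis code itself is never re-analysed (avoiding infinite regression).
import threading
import functools

_state = threading.local()

def current_level():
    return getattr(_state, "level", 1)

def polymorphic(instrumentations):
    """Wrap a function with one variant per instrumentation level."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            hook = instrumentations.get(current_level())
            if hook is None:                 # level 0: run the plain code
                return func(*args, **kwargs)
            _state.level = 0                 # analysis runs uninstrumented
            try:
                hook(func.__name__, args)
            finally:
                _state.level = 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

def call_logger(name, args):
    print(f"call {name}{args}")

@polymorphic({1: call_logger})
def add(a, b):
    return a + b

print(add(2, 3))   # logs the call, then returns 5
```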
99.
A 3D stereoscopic head-up display using a tunable bandpass filter to perform spectral separation of the left and right images is presented. Using a single filter reduces the size and cost of the head-up display optical engine and enables each spectral band to be tuned accurately. Experiments performed on the first prototype demonstrate the ability to tune the bandpass frequency continuously over a 30-nm range while keeping a 20-nm bandwidth. Such a system avoids the use of a bulky and costly rotating wheel and enables the use of holographic optical elements, which are known to be wavelength selective.
100.