Results by access type:
  Subscription full text: 5459 articles
  Free: 164 articles
  Free (domestic): 32 articles
Results by subject:
  Electrical engineering: 58 articles
  General: 20 articles
  Chemical industry: 1075 articles
  Metalworking: 356 articles
  Machinery and instrumentation: 84 articles
  Building science: 150 articles
  Mining engineering: 26 articles
  Energy and power: 98 articles
  Light industry: 1019 articles
  Water conservancy engineering: 36 articles
  Petroleum and natural gas: 11 articles
  Radio and electronics: 340 articles
  General industrial technology: 790 articles
  Metallurgical industry: 471 articles
  Nuclear technology: 20 articles
  Automation technology: 1101 articles
Results by publication year:
  2023: 35 articles
  2022: 71 articles
  2021: 103 articles
  2020: 72 articles
  2019: 58 articles
  2018: 171 articles
  2017: 182 articles
  2016: 225 articles
  2015: 190 articles
  2014: 180 articles
  2013: 266 articles
  2012: 300 articles
  2011: 833 articles
  2010: 259 articles
  2009: 242 articles
  2008: 239 articles
  2007: 203 articles
  2006: 179 articles
  2005: 134 articles
  2004: 120 articles
  2003: 102 articles
  2002: 108 articles
  2001: 58 articles
  2000: 51 articles
  1999: 66 articles
  1998: 133 articles
  1997: 88 articles
  1996: 71 articles
  1995: 37 articles
  1994: 34 articles
  1993: 35 articles
  1992: 21 articles
  1991: 18 articles
  1981: 17 articles
  1980: 19 articles
  1976: 25 articles
  1936: 29 articles
  1935: 32 articles
  1934: 21 articles
  1933: 22 articles
  1932: 16 articles
  1931: 25 articles
  1930: 24 articles
  1929: 20 articles
  1928: 46 articles
  1927: 36 articles
  1925: 23 articles
  1915: 16 articles
  1913: 49 articles
  1912: 28 articles
A total of 5655 results found; search time 15 ms.
101.
In the development of nanoparticles for polymer matrix composites, the particle/agglomerate size and the particle/agglomerate distribution within the composite are often crucial. This is illustrated here for optical applications with measurements of refractive index and transmittance. Classical blending techniques, in which nanoparticles are dispersed in polymers or resins, are compared with a combination of a special gas-phase synthesis method and subsequent in-situ deposition of the nanoparticles in high-boiling liquids. The particles/agglomerates were characterized with respect to particle size and particle size distribution using transmission electron microscopy and dynamic light scattering. Additionally, important material properties are determined, such as mechanical properties relevant for the application and viscosity relevant for processing. It is shown that, with in-situ dispersed nanoparticles synthesized in a microwave plasma process, composites with finely dispersed particles/agglomerates are attainable.
102.
Abstract: Managing multiple ontologies is now a core issue in most applications that require semantic interoperability. The semantic web is surely the most significant illustration of this: the current challenge is not to design, develop and deploy domain ontologies but to define semantic correspondences among multiple ontologies covering overlapping domains. In this paper, we introduce a new approach to ontology matching named axiom-based ontology matching. As this approach is founded on the use of axioms, it is mainly dedicated to heavyweight ontologies, but it can also be applied to lightweight ontologies as a complement to the current techniques based on the analysis of natural language expressions, instances and/or taxonomical structures of ontologies. This new matching paradigm is defined in the context of the conceptual graphs model, where the projection (i.e. the main operator for reasoning with conceptual graphs, which corresponds to graph homomorphism) is used as a means to semantically match the concepts and relations of two ontologies through the explicit representation of axioms in terms of conceptual graphs. We also introduce an ontology of representation, called MetaOCGL, dedicated to reasoning about heavyweight ontologies at the meta-level.
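As a rough, self-contained illustration of the projection operator mentioned above (a brute-force check for a labelled graph homomorphism, not the authors' implementation), the following Python sketch tests whether a small query graph projects into a target graph; the concept types, relation labels and the toy type hierarchy are invented for the example.

```python
# Minimal sketch (not the authors' code): does a "query" concept graph project
# into a "target" graph, i.e. does a labelled graph homomorphism exist?
from itertools import product

def projects(query_nodes, query_edges, target_nodes, target_edges, subtype):
    """query_nodes/target_nodes: {node: concept_type};
    query_edges/target_edges: {(src, dst): relation_label};
    subtype(a, b): True if concept type a is a specialization of b."""
    q = list(query_nodes)
    # candidate images: target nodes whose type specializes the query node's type
    candidates = [
        [t for t, ttype in target_nodes.items() if subtype(ttype, query_nodes[n])]
        for n in q
    ]
    for image in product(*candidates):
        mapping = dict(zip(q, image))
        # every query edge must land on a target edge carrying the same relation
        if all(
            target_edges.get((mapping[s], mapping[d])) == rel
            for (s, d), rel in query_edges.items()
        ):
            return mapping
    return None

# Toy usage with a hypothetical type hierarchy: Researcher <= Person
subtype = lambda a, b: a == b or (a, b) == ("Researcher", "Person")
query = ({"x": "Person", "y": "Paper"}, {("x", "y"): "authorOf"})
target = ({"t1": "Researcher", "t2": "Paper"}, {("t1", "t2"): "authorOf"})
print(projects(*query, *target, subtype))  # {'x': 't1', 'y': 't2'}
```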
103.
104.
This paper proposes an approach to compute view-normalized body part trajectories of pedestrians walking on potentially non-linear paths. The proposed approach finds applications in gait modeling, gait biometrics, and medical gait analysis. Our approach uses the 2D trajectories of both feet and the head extracted from the tracked silhouettes. On that basis, it computes the apparent walking (sagittal) plane of each detected gait half-cycle. A homography transformation is then computed for each walking plane to make it appear as if walking were observed from a fronto-parallel view. Finally, each homography is applied to the head and feet trajectories over the corresponding gait half-cycle. View normalization makes the head and feet trajectories appear as if seen from a fronto-parallel viewpoint, which is assumed to be optimal for gait modeling purposes. The proposed approach is fully automatic, as it requires neither manual initialization nor camera calibration. An extensive experimental evaluation confirms the validity of the normalization process.
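The normalization step described above can be pictured with the following Python sketch (using OpenCV), which is not the authors' code: it estimates a homography that maps four assumed corners of an apparent walking plane to a fronto-parallel rectangle and re-projects head and foot trajectories with it; all coordinates are made up.

```python
# Hypothetical sketch of view normalization for one gait half-cycle.
import numpy as np
import cv2

# Four image-plane corners of the apparent (sagittal) walking plane (invented)
plane_in_image = np.float32([[220, 180], [420, 200], [430, 460], [210, 430]])
# Where those corners should land in a fronto-parallel view (a simple rectangle)
fronto_parallel = np.float32([[0, 0], [200, 0], [200, 300], [0, 300]])

H, _ = cv2.findHomography(plane_in_image, fronto_parallel)

# 2D trajectories (head and one foot) over the half-cycle, shape (N, 1, 2) for OpenCV
head = np.float32([[230, 190], [260, 192], [290, 195]]).reshape(-1, 1, 2)
foot = np.float32([[240, 440], [280, 445], [320, 450]]).reshape(-1, 1, 2)

# Apply the same homography to both trajectories to normalize the viewpoint
head_norm = cv2.perspectiveTransform(head, H).reshape(-1, 2)
foot_norm = cv2.perspectiveTransform(foot, H).reshape(-1, 2)
print(head_norm, foot_norm)
```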
105.
Of the very few practical implementations of program slicing algorithms, the majority deal with C/C++ programs. Yet, preprocessor-related issues have been only marginally addressed by these slicers, despite the fact that ignoring (or only partially handling) these constructs may lead to serious inaccuracies in the slicing results and hence in the program analysis task being performed. Recently, an accurate slicing method for preprocessor-related constructs has been proposed which, when combined with existing C/C++ language slicers, can provide more complete slices and hence a more successful analysis of programs written in these languages. In this paper, we present our approach, which combines the two slicing methods, and, via practical experiments, describe its benefits in terms of the completeness of the resulting slices.
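As a minimal sketch of the combination idea, assuming that each slicer reports its slice as a set of (file, line) pairs, the combined and more complete slice can be taken as the union of the two; the file names and line numbers below are invented.

```python
# Hypothetical sketch: merge the slice produced by a C/C++ language slicer with
# the slice produced by a preprocessor-aware slicer.
language_slice = {("parser.c", 12), ("parser.c", 13), ("util.c", 40)}
preprocessor_slice = {("parser.c", 13), ("config.h", 3), ("config.h", 7)}

combined_slice = language_slice | preprocessor_slice   # union of the two slices

for file_name, line_no in sorted(combined_slice):
    print(f"{file_name}:{line_no}")
```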
106.
This article deals with a local improvement of domain decomposition methods for 2-dimensional elliptic problems for which either the geometry or the domain decomposition presents conical singularities. The main results of the theoretical analysis carried out in Chniti et al. (Calcolo 45, 2008) are first recalled; the numerical experiments presented in this article then confirm the optimality properties of the new interface conditions.
107.
The bilateral filter is a nonlinear filter that smooths a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and to achieve a significant acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, with a result that is visually similar to the exact computation, which takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
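For reference, the following Python sketch implements the standard brute-force bilateral filter that the paper accelerates; it illustrates the definition (spatial Gaussian times range Gaussian), not the paper's downsampling scheme, and the kernel parameters and toy image are arbitrary.

```python
# Slow, brute-force bilateral filter on a grayscale image (illustration only).
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=6):
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial Gaussian
    padded = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range Gaussian: penalize intensity differences to preserve edges
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

# Toy usage on a noisy step edge (values in [0, 1])
step = np.tile(np.concatenate([np.zeros(16), np.ones(16)]), (32, 1))
noisy = step + 0.05 * np.random.randn(32, 32)
print(bilateral_filter(noisy).round(2))
```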
108.
Applications related to game technology, law enforcement, security, medicine and biometrics are becoming increasingly important, and this, combined with the proliferation of three-dimensional (3D) scanning hardware, has made 3D face recognition a promising and feasible alternative to two-dimensional (2D) face recognition methods. The main advantage of 3D data, when compared with traditional 2D approaches, is that it provides information that is invariant to rigid geometric transformations and to pose and illumination conditions. One key element of any 3D face recognition system is the modeling of the available scanned data. This paper presents new 3D models for facial surface representation and evaluates them using two matching approaches: one based on support vector machines and another based on principal component analysis (with a Euclidean classifier). Two types of environments were also tested in order to check the robustness of the proposed models: a controlled environment with respect to facial conditions (i.e. expressions, face rotations, etc.) and a non-controlled one (presenting face rotations and pronounced facial expressions). The recognition rates obtained using reduced spatial resolution representations (77.86% for non-controlled environments and 90.16% for controlled environments, respectively) show that the proposed models can be effectively used for practical face recognition applications.
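A minimal sketch of the second matching pipeline mentioned above (PCA followed by a Euclidean nearest-neighbour classifier), written with NumPy and scikit-learn on random placeholder data rather than real facial-surface representations; the dimensions and subject count are assumptions.

```python
# Hypothetical PCA + Euclidean matching sketch on placeholder data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, dim = 20, 4096          # e.g. 64x64 range images, flattened
gallery = rng.normal(size=(n_subjects, dim))          # one enrolled scan per subject
probe = gallery[7] + 0.1 * rng.normal(size=dim)       # noisy scan of subject 7

pca = PCA(n_components=15).fit(gallery)
gallery_feat = pca.transform(gallery)
probe_feat = pca.transform(probe.reshape(1, -1))

# Euclidean classifier: pick the gallery subject closest to the probe
dists = np.linalg.norm(gallery_feat - probe_feat, axis=1)
print("identified subject:", int(np.argmin(dists)))   # expected: 7
```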
109.
In this study, an optimization of the airfoil of a sailplane is carried out by a recently developed multi-objective genetic algorithm based on microevolution, which incorporates crowding, range adaptation, knowledge-based reinitialization and ε-dominance. Its efficiency was tested on a set of test problems. The results are encouraging, suggesting that very small populations can be used effectively to solve real-world multi-objective optimization problems in many cases of interest.
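As a small illustration of one ingredient listed above, the following Python sketch shows plain Pareto dominance and a common additive form of ε-dominance for minimization problems; the objective values and the ε threshold are invented, and this is not the algorithm's actual archive logic.

```python
# Dominance tests commonly used in multi-objective GA archives (minimization).
def dominates(a, b):
    """a Pareto-dominates b: no objective is worse and at least one is better."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_dominates(a, b, eps=0.05):
    """Additive epsilon-dominance: a, relaxed by eps, is at least as good as b."""
    return all(x - eps <= y for x, y in zip(a, b))

# Toy usage on two-objective vectors (e.g. drag and a structural penalty of an airfoil)
a, b = (0.012, 1.30), (0.013, 1.32)
print(dominates(a, b))        # True
print(eps_dominates(b, a))    # True: b is within eps of a in every objective
```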
110.
Bytecode instrumentation is a widely used technique to implement aspect weaving and dynamic analyses in virtual machines such as the Java virtual machine. Aspect weavers and other instrumentations are usually developed independently, and combining them often requires significant engineering effort, if it is possible at all. In this article, we present polymorphic bytecode instrumentation (PBI), a simple but effective technique that allows dynamic dispatch amongst several, possibly independent instrumentations. PBI enables complete bytecode coverage, that is, any method with a bytecode representation can be instrumented. We illustrate further benefits of PBI with three case studies. First, we describe how PBI can be used to implement a comprehensive profiler of inter-procedural and intra-procedural control flow. Second, we provide an implementation of execution levels for AspectJ, which avoids infinite regression and unwanted interference between aspects. Third, we present a framework for adaptive dynamic analysis, where the analysis to be performed can be changed at runtime by the user. We assess the overhead introduced by PBI and provide thorough performance evaluations of PBI in all three case studies. We show that pure Java profilers like JP2 can, thanks to PBI, produce accurate execution profiles by covering all code, including the core Java libraries. We then demonstrate that PBI-based execution levels are much faster than control-flow pointcuts at avoiding interference between aspects and that their efficient integration in a practical aspect language is possible. Finally, we report that PBI enables adaptive dynamic analysis tools that are more reactive to user inputs than existing tools that rely on dynamic aspect-oriented programming with runtime weaving. These experiments position PBI as a widely applicable and practical approach for combining bytecode instrumentations. © 2015 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
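The dispatch idea behind PBI can be pictured with a conceptual Python analogue (the actual technique operates on Java bytecode): several versions of a method coexist and each call is routed to the currently active instrumentation; the function names and the thread-local selection mechanism are invented for the sketch.

```python
# Conceptual analogue of PBI-style dispatch, not a bytecode implementation.
import threading

_active = threading.local()          # which instrumentation version is active
_active.version = "uninstrumented"

def polymorphic(versions):
    """Wrap a function so each call dispatches to the currently active version."""
    def decorator(original):
        table = {"uninstrumented": original, **versions}
        def dispatcher(*args, **kwargs):
            version = getattr(_active, "version", "uninstrumented")
            return table.get(version, original)(*args, **kwargs)
        return dispatcher
    return decorator

def compute(x):
    return x * x

def compute_profiled(x):
    print("profiler: compute called")   # stand-in for a profiling instrumentation
    return x * x

compute = polymorphic({"profiler": compute_profiled})(compute)

print(compute(3))            # runs the uninstrumented version
_active.version = "profiler"
print(compute(3))            # now dispatches to the profiled version
```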