A total of 1,896 results were found for this query (search time: 15 ms).
41.
Simulations of the interaction between a vortex and a NACA0012 airfoil are performed with a stable, high-order accurate (in space and time), multi-block finite difference solver for the compressible Navier-Stokes equations. We begin by computing a benchmark test case to validate the code. Next, the flow with steady inflow conditions is computed on several different grids. The resolution of the boundary layer as well as the amount of artificial dissipation is studied to establish the necessary resolution requirements. We propose an accuracy test based on the weak imposition of the boundary conditions that does not require grid refinement. Finally, we compute the vortex-airfoil interaction and calculate the lift and drag coefficients. It is shown that the viscous terms add the effect of detailed small-scale structures to the lift and drag coefficients.
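For reference (a standard convention, not taken from this abstract), the lift and drag coefficients reported in such airfoil simulations are typically nondimensionalized with the freestream density ρ∞, freestream speed U∞, and chord length c:

\[
C_L = \frac{L}{\tfrac{1}{2}\,\rho_\infty U_\infty^2\, c}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\,\rho_\infty U_\infty^2\, c},
\]

where L and D are the lift and drag forces per unit span obtained by integrating the pressure and viscous stresses over the airfoil surface.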
42.
We have implemented a compiler for key parts of Modelica, an object-oriented language supporting equation-based modeling and simulation of complex physical systems. The compiler is extensible, to support experiments with emerging tools for physical models. To achieve extensibility, the implementation is done declaratively in JastAdd, a metacompilation system supporting modern attribute grammar mechanisms such as reference attributes and nonterminal attributes. This paper reports on experiences from this implementation. For name and type analyses, we illustrate how declarative design strategies, originally developed for a Java compiler, could be reused to support Modelica's advanced features of multiple inheritance and structural subtyping. Furthermore, we present new general design strategies for declarative generation of target ASTs from source ASTs. We illustrate how these strategies are used to resolve a generics-like feature of Modelica called modifications, and to support flattening, a fundamental part of Modelica compilation. To validate that the approach is practical, we have compared the execution speed of our compiler to two existing Modelica compilers.
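To make the idea of reference attributes concrete, the following minimal sketch (illustrative only, with hypothetical names, and not the authors' JastAdd specification) shows a demand-driven, memoized name lookup that returns a direct reference to the declaration node, which is the essence of reference-attribute-based name analysis:

```python
# Illustrative sketch of a memoized, demand-driven "reference attribute"
# for name analysis; lookup returns the declaration node itself (a reference).
class Decl:
    def __init__(self, name, typ):
        self.name, self.typ = name, typ

class Scope:
    def __init__(self, decls, parent=None):
        self.decls = {d.name: d for d in decls}
        self.parent = parent
        self._cache = {}  # memoization, mimicking cached attribute evaluation

    def lookup(self, name):
        if name not in self._cache:
            if name in self.decls:
                self._cache[name] = self.decls[name]
            else:
                self._cache[name] = self.parent.lookup(name) if self.parent else None
        return self._cache[name]

# Usage: an inner scope resolves 'x' to the declaration in the enclosing scope.
outer = Scope([Decl("x", "Real")])
inner = Scope([], parent=outer)
assert inner.lookup("x").typ == "Real"
```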
43.
Company growth in a global setting causes challenges in the adaptation and maintenance of an organization's methods. In this paper, we analyze incremental method evolution in software product management in a global environment. We validate a method increment approach, based on method engineering principles, by applying it to a retrospective case study conducted at a large ERP vendor. The results show that the method increment types cover all increments that were found in the case study. We also identified the following lessons learned for company growth in a global software product management context: method increment drivers, such as a change of business strategy, vary during evolution; a shared infrastructure is critical for rollout; small increments facilitate gradual process improvement; and global involvement is critical. We then claim that method increments enable software companies to accommodate evolutionary adaptations of the development process in step with overall company expansion.
44.
45.
The major goal of the COSPAL project is to develop an artificial cognitive system architecture with the ability to autonomously extend its capabilities. Exploratory learning is one strategy that allows an extension of competences as provided by the environment of the system. Whereas classical learning methods aim, at best, for parametric generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class and at applying generalization on a conceptual level, resulting in new models. Incremental or online learning is a crucial requirement for exploratory learning. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on the organization of cognitive systems for efficient operation. Learning is used over the entire system. It is organized in the form of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop performs the acquired competences in terms of perception–action cycles. We present a system diagram which explains this process in more detail. We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between the user ('teacher') and the system is a major difference from classical robotics systems, where the system designer places his world model into the system. We believe that this is the key to extendable, robust system behavior and to successful interaction between humans and artificial cognitive systems. We furthermore address the issue of bootstrapping the system and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is, however, work in progress, and no final results are available yet. The preliminary results that we have achieved so far clearly point towards a successful proof of the architecture concept.
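As a purely structural sketch of the four nested loops described above (hypothetical names and control flow; this is not the COSPAL implementation), one episode for a user-provided scenario might be organized as follows:

```python
# Structural sketch only: symbolic/sub-symbolic mode switching wrapped around
# perception-action cycles, with user reinforcement closing the outermost loop.
def run_episode(scenario, symbolic_modes, subsymbolic_modes,
                perceive, act, solved, user_feedback, max_cycles=100):
    for sym_mode in symbolic_modes:              # switch symbolic solution modes
        for sub_mode in subsymbolic_modes:       # switch sub-symbolic solution modes
            for _ in range(max_cycles):          # innermost: perception-action cycles
                action = act(sym_mode, sub_mode, perceive(scenario))
                if solved(scenario, action):
                    return user_feedback(scenario)   # reinforcement from the 'teacher'
    return user_feedback(scenario)               # no mode succeeded: negative feedback
```

The outermost user-reinforcement loop is then simply a loop over scenarios that feeds the returned reward back into the learning components.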
46.
Fixed-priority scheduling with deferred preemption (FPDS) has been proposed in the literature as a viable alternative to fixed-priority preemptive scheduling (FPPS), one that obviates the need for non-trivial resource access protocols and reduces the cost of arbitrary preemptions. This paper shows that existing worst-case response time analysis of hard real-time tasks under FPDS, arbitrary phasing and relative deadlines at most equal to periods is pessimistic and/or optimistic. The same problem also arises for fixed-priority non-preemptive scheduling (FPNS), being a special case of FPDS. This paper provides a revised analysis, resolving the problems with the existing approaches. The analysis is based on the known concepts of critical instant and busy period for FPPS. To accommodate our scheduling model for FPDS, we slightly modify the existing definitions of these concepts. The analysis assumes a continuous scheduling model, which is based on a partitioning of the timeline into a set of non-empty, right semi-open intervals. It is shown that the critical instant, longest busy period, and worst-case response time for a task are suprema rather than maxima for all tasks except the lowest-priority task. Hence, that instant, period, and response time cannot be attained for any task except the lowest-priority task. Moreover, it is shown that the analysis is not uniform for all tasks, i.e., the analysis for the lowest-priority task differs from the analysis of the other tasks. These anomalies for the lowest-priority task are an immediate consequence of the fact that only the lowest-priority task cannot be blocked. To build on earlier work, the worst-case response time analysis for FPDS is expressed in terms of known worst-case analysis results for FPPS. The paper includes pessimistic variants of the analysis, which are uniform for all tasks, illustrates the revised analysis for an advanced model for FPDS, where tasks are structured as flow graphs of subjobs rather than sequences, and shows that our analysis is sustainable.
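For context, the known FPPS result that the revised FPDS analysis builds on is the standard worst-case response time recurrence (a textbook result, stated here under the usual assumptions of tasks with periods T_j, worst-case computation times C_j, and a blocking term B_i):

\[
R_i^{(k+1)} = B_i + C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j,
\]

where hp(i) denotes the set of tasks with a higher priority than task i. The iteration starts from R_i^{(0)} = B_i + C_i, and the worst-case response time is the smallest fixed point, provided it does not exceed the relative deadline D_i.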
47.
General Adaptive Neighborhood Choquet Image Filtering
A novel framework entitled General Adaptive Neighborhood Image Processing (GANIP) has recently been introduced in order to propose an original image representation and mathematical structure for adaptive image processing and analysis. The central idea is based on the key notion of adaptivity, which is simultaneously associated with the analyzing scales, the spatial structures and the intensity values of the image to be addressed. In this paper, the GANIP framework is briefly presented and particularly studied in the context of Choquet filtering (using fuzzy measures), which generalizes a large class of image filters. The resulting spatially-adaptive operators are studied with respect to the general GANIP framework and illustrated in both the biomedical and materials application areas. In addition, the proposed GAN-based filters are applied in practice and compared to several other denoising methods through experiments on image restoration, showing the high performance of the GAN-based Choquet filters.
Jean-Charles Pinoli
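As background (the standard definition, not reproduced from the paper), the discrete Choquet integral with respect to a fuzzy measure μ, which underlies Choquet filtering, is

\[
\mathcal{C}_\mu(f) \;=\; \sum_{i=1}^{n} \bigl( f(x_{(i)}) - f(x_{(i-1)}) \bigr)\, \mu(A_{(i)}),
\]

where (·) is a permutation of the n sample points such that \( f(x_{(1)}) \le \dots \le f(x_{(n)}) \), with \( f(x_{(0)}) = 0 \) and \( A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\} \). Weighted means and rank-order filters such as the median are recovered for particular choices of μ, which is why Choquet filtering generalizes a large class of image filters.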
48.
The article explores approaches to discourses concerning age, with different agendas and national contexts. The Dialogue Seminar Method is introduced as a means of facilitating reflection and access to tacit knowledge. Democratic dialogue requires orchestration and enables horizontal communication and collective reflection.
Johan Berglund
49.
In computational aero-acoustics, large-eddy simulations (LES) or direct numerical simulations (DNS) are often employed for flow computations in the source region. As part of the numerical implementation or the required modeling, explicit spatial filters are frequently employed. For instance, in LES, spatial filters are employed in the formulation of various subgrid-scale (SGS) models such as the dynamic model or the variational multi-scale (VMS) Smagorinsky model; in both LES and DNS, spatial high-pass filters are often used to remove undesired grid-to-grid oscillations. Though these types of spatial filters adhere to local accuracy requirements, in practice they often destroy global conservation properties in the presence of non-periodic boundary conditions. This leads to the incorrect prediction of the flow properties near hard boundaries, such as walls. In the current work, we present globally conservative, high-order accurate filters, which combine traditional filters at the internal points with one-sided conservative filters near the wall boundary. We test these filters for removing grid-to-grid oscillations both in a channel-flow case and in a 2D cavity flow. We find that the use of a non-conservative filter leads to erroneous predictions of the skin friction in channel flows of up to 30%. In the cavity-flow simulations, the use of non-conservative filters to remove grid-to-grid oscillations leads to significant shifts in the Strouhal number of the dominant mode and a change of the flow pattern inside the cavity. In all cases, the use of conservative high-order filter formulations to remove grid-to-grid oscillations leads to very satisfactory results. Finally, in our channel-flow test case, we also illustrate the importance of using conservative filters for the formulation of the VMS Smagorinsky model.
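To illustrate why conservation depends on how the boundary closure is written (a minimal 1D sketch with an illustrative second-order filter, not the high-order filters of the paper): when the filter correction is expressed as a difference of interface fluxes with zero flux at the domain boundaries, the correction telescopes and the discrete sum of the filtered field is preserved exactly, whereas a locally accurate one-sided closure generally is not conservative.

```python
# Minimal sketch: a 2nd-order smoothing correction written in conservative
# flux-difference form vs. a one-sided boundary closure, on a non-periodic grid.
import numpy as np

def filter_conservative(phi, alpha=0.1):
    # Interface fluxes F_{i+1/2} = alpha*(phi_{i+1} - phi_i), zero at boundaries;
    # the correction F_{i+1/2} - F_{i-1/2} telescopes, so sum(phi) is preserved.
    flux = np.zeros(phi.size + 1)
    flux[1:-1] = alpha * np.diff(phi)
    return phi + np.diff(flux)

def filter_one_sided(phi, alpha=0.1):
    # Interior: centered second difference; boundaries: one-sided second
    # differences. Locally accurate, but the corrections no longer telescope.
    out = phi.copy()
    out[1:-1] += alpha * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2])
    out[0]  += alpha * (phi[0] - 2.0 * phi[1] + phi[2])
    out[-1] += alpha * (phi[-1] - 2.0 * phi[-2] + phi[-3])
    return out

phi = np.random.rand(64)
print(filter_conservative(phi).sum() - phi.sum())  # ~0 (machine precision)
print(filter_one_sided(phi).sum() - phi.sum())     # generally nonzero
```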
50.
Crop identification on specific parcels and the assessment of soil management practices are important for agro-ecological studies, greenhouse gas modeling, and agrarian policy development. Traditional pixel-based analysis of remotely sensed data results in inaccurate identification of some crops due to pixel heterogeneity, mixed pixels, spectral similarity, and crop pattern variability. These problems can be overcome using object-based image analysis (OBIA) techniques, which incorporate new spectral, textural and hierarchical features after segmentation of imagery. We combined OBIA and decision tree (DT) algorithms to develop a methodology, named Object-based Crop Identification and Mapping (OCIM), for a multi-seasonal assessment of a large number of crop types and field status. In our approach, we explored several vegetation indices (VIs) and textural features derived from visible, near-infrared and short-wave infrared (SWIR) bands of ASTER satellite scenes collected during three distinct growing-season periods (mid-spring, early-summer and late-summer). OCIM was developed for 13 major crops cultivated in the agricultural area of Yolo County in California, USA. The model design was built for four different scenarios (combinations of three or two periods) using two independent training and validation datasets, and the best DTs resulted in an error rate of 9% for the three-period model and between 12 and 16% for the two-period models. Next, the selected DT was used for the thematic classification of the entire cropland area, and the mapping was then evaluated by applying the confusion matrix method to the independent testing dataset, which reported 79% overall accuracy. OCIM detected intra-class variations in most crops, attributed to variability in local crop calendars, tree-orchard structures and land management operations. Spectral variables (based on VIs) contributed around 90% to the models, although textural variables were necessary to discriminate between most of the permanent crop fields (orchards, vineyard, alfalfa and meadow). Features extracted from late-summer imagery contributed around 60% to classification model development, whereas mid-spring and early-summer imagery contributed around 30% and 10%, respectively. The Normalized Difference Vegetation Index (NDVI) was used to identify the main groups of crops based on the presence and vigor of green vegetation within the fields, contributing around 50% to the models. In addition, other VIs based on SWIR bands were also crucial to crop identification because of their potential to detect field properties like moisture, vegetation vigor, non-photosynthetic vegetation and bare soil. The OCIM method was built using interpretable rules based on physical properties of the crops studied, and it was successful for object-based feature selection and crop identification.
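For reference, the NDVI mentioned above has the standard definition NDVI = (NIR − Red)/(NIR + Red); the short sketch below (illustrative values and function names, not part of OCIM) computes it per object from mean band reflectances:

```python
# Hedged sketch: per-object NDVI from mean red and near-infrared reflectances.
# Values and names are illustrative, not tied to the OCIM feature set.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Example: mean object reflectances for a vigorous summer crop vs. bare soil.
print(ndvi(0.45, 0.08))   # ~0.70 -> green, vigorous vegetation
print(ndvi(0.22, 0.18))   # ~0.10 -> sparse vegetation / bare soil
```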