Search results: 1,879 matches (entries 41–50 shown below).
41.
Company growth in a global setting creates challenges in adapting and maintaining an organization's methods. In this paper, we analyze incremental method evolution in software product management in a global environment. We validate a method-increment approach, based on method engineering principles, by applying it to a retrospective case study conducted at a large ERP vendor. The results show that the method increment types cover all increments found in the case study. We also identified the following lessons learned for company growth in a global software product management context: method increment drivers, such as a change of business strategy, vary during evolution; a shared infrastructure is critical for rollout; small increments facilitate gradual process improvement; and global involvement is critical. We conclude that method increments enable software companies to accommodate evolutionary adaptations of their development process in step with overall company expansion.
42.
43.
The major goal of the COSPAL project is to develop an artificial cognitive system architecture with the ability to autonomously extend its capabilities. Exploratory learning is one strategy that allows an extension of competences as provided by the environment of the system. Whereas classical learning methods aim, at best, for a parametric generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class and at generalizing on a conceptual level, resulting in new models. Incremental or online learning is a crucial requirement for exploratory learning.

In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on the organization of cognitive systems for efficient operation. Learning is used over the entire system. It is organized in the form of four nested loops, where the outermost loop reflects the user-reinforcement feedback loop, the two intermediate loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop executes the acquired competences in terms of perception-action cycles. We present a system diagram that explains this process in more detail.

We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between the user ('teacher') and the system is a major difference from classical robotics systems, where the system designer places his world model into the system. We believe that this is the key to extensible, robust system behavior and to successful interaction between humans and artificial cognitive systems.

We furthermore address the issue of bootstrapping the system and, in particular, the visual recognition module. We give more in-depth details about our recognition method and how feedback from higher levels is implemented.

The described system is, however, work in progress, and no final results are available yet. The preliminary results achieved so far clearly point towards a successful proof of the architecture concept.
44.
Fixed-priority scheduling with deferred preemption (FPDS) has been proposed in the literature as a viable alternative to fixed-priority pre-emptive scheduling (FPPS): it obviates the need for non-trivial resource access protocols and reduces the cost of arbitrary preemptions. This paper shows that the existing worst-case response time analysis of hard real-time tasks under FPDS, arbitrary phasing, and relative deadlines at most equal to periods is pessimistic and/or optimistic. The same problem arises for fixed-priority non-pre-emptive scheduling (FPNS), which is a special case of FPDS. This paper provides a revised analysis that resolves the problems with the existing approaches. The analysis is based on the known concepts of critical instant and busy period for FPPS; to accommodate our scheduling model for FPDS, we slightly modify the existing definitions of these concepts. The analysis assumes a continuous scheduling model, based on a partitioning of the timeline into a set of non-empty, right semi-open intervals. It is shown that the critical instant, longest busy period, and worst-case response time for a task are suprema rather than maxima for all tasks except the lowest-priority task; hence, that instant, period, and response time cannot be assumed for any task except the lowest-priority task. Moreover, the analysis is not uniform for all tasks, i.e., the analysis for the lowest-priority task differs from that of the other tasks. These anomalies for the lowest-priority task are an immediate consequence of the fact that only the lowest-priority task cannot be blocked. To build on earlier work, the worst-case response time analysis for FPDS is expressed in terms of known worst-case analysis results for FPPS.

The paper also includes pessimistic variants of the analysis that are uniform for all tasks, illustrates the revised analysis for an advanced FPDS model in which tasks are structured as flow graphs of subjobs rather than sequences, and shows that our analysis is sustainable.
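As background, the classical FPPS worst-case response-time recurrence that the revised FPDS analysis builds on can be sketched as follows. This is a minimal illustration of the standard fixed-point iteration only, not the paper's revised FPDS analysis, and the task set is hypothetical:

```python
import math

def wcrt_fpps(tasks, i):
    """Worst-case response time of task i under FPPS.

    `tasks` is sorted by descending priority; each task is a (C, T)
    pair (worst-case computation time, period), with deadline <= period.
    Classical recurrence: R <- C_i + sum_{j in hp(i)} ceil(R / T_j) * C_j.
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        R_next = C_i + sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in tasks[:i])
        if R_next == R:
            return R           # fixed point: worst-case response time
        if R_next > T_i:
            return None        # no fixed point within the period
        R = R_next

# Hypothetical task set (C, T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
print([wcrt_fpps(tasks, i) for i in range(3)])   # [1, 3, 10]
```

The iteration converges because each preempting term is monotone in R; the paper's contribution lies in replacing the "maximum at the critical instant" assumption behind this recurrence with suprema over busy periods for FPDS.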
45.
General Adaptive Neighborhood Choquet Image Filtering
A novel framework entitled General Adaptive Neighborhood Image Processing (GANIP) has recently been introduced to provide an original image representation and mathematical structure for adaptive image processing and analysis. The central idea is the key notion of adaptivity, which is simultaneously associated with the analyzing scales, the spatial structures, and the intensity values of the image under study. In this paper, the GANIP framework is briefly presented and then studied in the context of Choquet filtering (using fuzzy measures), which generalizes a large class of image filters. The resulting spatially-adaptive operators are studied with respect to the general GANIP framework and illustrated in both biomedical and materials application areas. In addition, the proposed GAN-based filters are applied in practice and compared to several other denoising methods through experiments on image restoration, showing the high performance of the GAN-based Choquet filters.
Jean-Charles Pinoli
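The discrete Choquet integral underlying Choquet filtering can be sketched generically as follows. This is a minimal illustration with respect to an arbitrary fuzzy measure, not the paper's spatially-adaptive GAN operators, and the example measure is hypothetical:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` w.r.t. the fuzzy measure `mu`.

    `mu` maps a frozenset of indices to [0, 1], with mu(empty set) = 0 and
    monotonicity under set inclusion.  With x_(1) <= ... <= x_(n) sorted:
        C_mu(x) = sum_k (x_(k) - x_(k-1)) * mu({ j : x_j >= x_(k) }).
    """
    order = sorted(range(len(values)), key=lambda j: values[j])
    total, prev = 0.0, 0.0
    for k, j in enumerate(order):
        coalition = frozenset(order[k:])   # indices with value >= x_(k)
        total += (values[j] - prev) * mu(coalition)
        prev = values[j]
    return total

# With the additive measure |S|/n, the Choquet integral is the plain mean;
# with mu(S) = 1 for every non-empty S, it reduces to the maximum.
vals = [0.2, 0.5, 0.8]
mean_mu = lambda S: len(S) / len(vals)
print(choquet_integral(vals, mean_mu))   # ~0.5
```

Choosing the fuzzy measure is what lets one filter interpolate between mean-, median- and max-like behavior, which is why the Choquet setting generalizes a large class of image filters.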
46.
The article explores approaches to discourses concerning age, with different agendas and national contexts. The Dialogue Seminar Method is introduced as a means of facilitating reflection and access to tacit knowledge. Democratic dialogue requires orchestration; it enables horizontal communication and collective reflection.
Johan Berglund
47.
In computational aero-acoustics, large-eddy simulations (LES) or direct numerical simulations (DNS) are often employed for flow computations in the source region. As part of the numerical implementation or of the required modeling, explicit spatial filters are frequently employed. For instance, in LES spatial filters appear in the formulation of various subgrid-scale (SGS) models, such as the dynamic model or the variational multi-scale (VMS) Smagorinsky model; in both LES and DNS, spatial high-pass filters are often used to remove undesired grid-to-grid oscillations. Though these types of spatial filters adhere to local accuracy requirements, in practice they often destroy global conservation properties in the presence of non-periodic boundary conditions. This leads to incorrect prediction of the flow properties near hard boundaries, such as walls. In the current work, we present globally conservative, high-order accurate filters that combine traditional filters at the internal points with one-sided conservative filters near the wall boundary. We test these filters for removing grid-to-grid oscillations in both a channel-flow case and a 2D cavity flow. We find that using a non-conservative filter leads to errors in the predicted skin friction in channel flows of up to 30%. In the cavity-flow simulations, using non-conservative filters to remove grid-to-grid oscillations leads to significant shifts in the Strouhal number of the dominant mode and a change of the flow pattern inside the cavity. In all cases, the use of conservative high-order filter formulations to remove grid-to-grid oscillations leads to very satisfactory results. Finally, in our channel-flow test case, we also illustrate the importance of using conservative filters in the formulation of the VMS Smagorinsky model.
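The conservation property at stake can be illustrated with a minimal 1-D sketch, assuming a simple second-order smoothing filter written in flux form with zero boundary fluxes. This illustrates only the telescoping-sum argument, not the high-order filters of the paper:

```python
def filter_flux_form(u, alpha=0.25):
    """Apply u_i <- u_i + alpha*(u_{i+1} - u_i) - alpha*(u_i - u_{i-1}),
    written in conservative (flux) form.

    F[i] is the filter flux at face i-1/2.  Setting the two boundary
    fluxes to zero makes the change of the global sum telescope to
    F[n] - F[0] = 0, so the sum is conserved exactly.
    """
    n = len(u)
    F = [0.0] * (n + 1)
    for i in range(1, n):
        F[i] = alpha * (u[i] - u[i - 1])    # interior faces only
    return [u[i] + F[i + 1] - F[i] for i in range(n)]

u = [0.0, 1.0, 4.0, 2.0, 1.0, 0.5]
v = filter_flux_form(u)
print(sum(u), sum(v))   # both 8.5: the filtered field keeps the global sum
```

A filter applied pointwise with one-sided stencils that are merely accurate (not written in flux form) has no such telescoping structure, which is the mechanism behind the skin-friction errors reported above.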
48.
Crop identification on specific parcels and the assessment of soil management practices are important for agro-ecological studies, greenhouse-gas modeling, and agrarian policy development. Traditional pixel-based analysis of remotely sensed data identifies some crops inaccurately because of pixel heterogeneity, mixed pixels, spectral similarity, and crop-pattern variability. These problems can be overcome using object-based image analysis (OBIA) techniques, which incorporate new spectral, textural, and hierarchical features after segmentation of the imagery. We combined OBIA and decision tree (DT) algorithms to develop a methodology, named Object-based Crop Identification and Mapping (OCIM), for a multi-seasonal assessment of a large number of crop types and field status.

In our approach, we explored several vegetation indices (VIs) and textural features derived from the visible, near-infrared, and short-wave infrared (SWIR) bands of ASTER satellite scenes collected during three distinct growing-season periods (mid-spring, early summer, and late summer). OCIM was developed for 13 major crops cultivated in the agricultural area of Yolo County, California, USA. The model was built for four different scenarios (combinations of three or two periods) using two independent training and validation datasets; the best DTs yielded an error rate of 9% for the three-period model and between 12% and 16% for the two-period models. The selected DT was then used for thematic classification of the entire cropland area, and the mapping was evaluated by applying the confusion-matrix method to an independent testing dataset, which gave 79% overall accuracy. OCIM detected intra-class variations in most crops, attributed to variability in local crop calendars, tree-orchard structures, and land-management operations.

Spectral variables (based on VIs) contributed around 90% to the models, although textural variables were necessary to discriminate between most of the permanent crop fields (orchards, vineyards, alfalfa, and meadow). Features extracted from late-summer imagery contributed around 60% to classification model development, whereas mid-spring and early-summer imagery contributed around 30% and 10%, respectively. The Normalized Difference Vegetation Index (NDVI) was used to identify the main groups of crops based on the presence and vigor of green vegetation within the fields, contributing around 50% to the models. In addition, other VIs based on SWIR bands were crucial to crop identification because of their potential to detect field properties such as moisture, vegetation vigor, non-photosynthetic vegetation, and bare soil. The OCIM method was built from interpretable rules based on physical properties of the crops studied, and it was successful for object-based feature selection and crop identification.
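The NDVI used here has a simple closed form, NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]. A minimal sketch follows; the reflectance values are hypothetical, not taken from the study:

```python
def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index for one pixel or object:
    NDVI = (NIR - Red) / (NIR + Red).  `eps` guards against a zero
    denominator on fully dark pixels."""
    return (nir - red) / (nir + red + eps)

# Hypothetical surface reflectances in the red and near-infrared bands.
print(round(ndvi(0.50, 0.08), 2))   # dense green vegetation -> high NDVI
print(round(ndvi(0.25, 0.20), 2))   # bare soil -> NDVI near zero
```

High NDVI reflects strong chlorophyll absorption in the red band together with strong NIR reflectance from leaf structure, which is why the index separates vigorous crops from bare or senescent fields.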
49.
Dagstuhl Seminar 10102 on discrete-event logistic systems recognized a network of persistent models to be a "Grand Challenge." Such an on-line model network will offer an infrastructure that facilitates the management of logistic operations. The ambition to create a network of persistent models implies a radical shift for model design activities, as the objective is an infrastructure rather than application-specific solutions. In particular, model developers can no longer assume that they know what their model will be used for: it is no longer possible to design for the expected.

This paper presents insights into model development and design in the absence of precise knowledge concerning a model's usage. Basically, model developers may rely solely on the presence of the real-world counterpart mirrored by their model and on a general idea about the nature of the application (e.g., coordination of logistic operations). When the invariants of their real-world counterpart suffice for models to be valid, these models become reusable and integrable. As such models remain valid under a wide range of situations, they become multi-purpose and durable resources rather than single-purpose, short-lived components or, worse, legacy.

More specifically, the paper describes how to build models that allow their users to generate predictions in unexpected situations and atypical conditions. Referring to previous work, the paper concisely discusses how these predictions can be generated from the models. This prediction-generating technology is currently being transferred into an industrial MES.
50.
An important objective of data mining is the development of predictive models. Based on a number of observations, a model is constructed that allows the analyst to provide classifications or predictions for new observations. Currently, most research focuses on improving the accuracy or precision of these models, and comparatively little research has been undertaken to increase their comprehensibility to the analyst or end-user. This is mainly due to the subjective nature of 'comprehensibility', which depends on many factors outside the model, such as the user's experience and prior knowledge. Despite this influence of the observer, some representation formats are generally considered more easily interpretable than others. In this paper, an empirical study is presented that investigates the suitability of a number of alternative representation formats for classification when interpretability is a key requirement. The formats under consideration are decision tables, (binary) decision trees, propositional rules, and oblique rules. An end-user experiment was designed to test accuracy, response time, and answer confidence for a set of problem-solving tasks involving these representations. Analysis of the results reveals that decision tables perform significantly better on all three criteria, while post-test voting also reveals a clear preference of users for decision tables in terms of ease of use.
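For illustration, a decision table can be represented as an explicit, exhaustive list of condition rows, which is what makes the format easy to audit row by row. A minimal sketch, with hypothetical attributes and cut-offs not taken from the study:

```python
# A decision table as an explicit list of (conditions, outcome) rows.
# Rows are intended to be exhaustive and mutually exclusive.
DECISION_TABLE = [
    ({"income": "high", "employed": True},  "accept"),
    ({"income": "high", "employed": False}, "review"),
    ({"income": "low",  "employed": True},  "review"),
    ({"income": "low",  "employed": False}, "reject"),
]

def classify(case):
    """Return the outcome of the first row whose conditions all match."""
    for conditions, outcome in DECISION_TABLE:
        if all(case[attr] == value for attr, value in conditions.items()):
            return outcome
    raise ValueError("decision table is not exhaustive for this case")

print(classify({"income": "high", "employed": False}))   # review
```

Unlike a decision tree, every combination of condition values is visible on its own row, so a user can verify completeness and consistency by inspection rather than by tracing paths.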