Sort order: 1841 query results in total (search time: 203 ms)
51.
An important objective of data mining is the development of predictive models. Based on a number of observations, a model is constructed that allows analysts to provide classifications or predictions for new observations. Currently, most research focuses on improving the accuracy or precision of these models, and comparatively little research has been undertaken to increase their comprehensibility to the analyst or end-user. This is mainly due to the subjective nature of ‘comprehensibility’, which depends on many factors outside the model, such as the user's experience and prior knowledge. Despite this influence of the observer, some representation formats are generally considered more easily interpretable than others. In this paper, an empirical study is presented that investigates the suitability of a number of alternative representation formats for classification when interpretability is a key requirement. The formats under consideration are decision tables, (binary) decision trees, propositional rules, and oblique rules. An end-user experiment was designed to test the accuracy, response time, and answer confidence for a set of problem-solving tasks involving these representations. Analysis of the results reveals that decision tables perform significantly better on all three criteria, while post-test voting also reveals a clear preference among users for decision tables in terms of ease of use.
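The difference between rule-based and tabular representations can be made concrete with a toy classifier. The two conditions and class labels below (a hypothetical loan-screening example) are invented for illustration and are not taken from the study; the point is that the same logic is written once as ordered propositional rules and once as an exhaustive decision table.

```python
def classify_rules(high_income, owns_home):
    """Ordered propositional rules, evaluated top to bottom."""
    if high_income and owns_home:
        return "approve"
    if high_income and not owns_home:
        return "review"
    return "reject"

# The same classifier as an exhaustive decision table:
# one row per combination of condition values.
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "review",
    (False, True):  "reject",
    (False, False): "reject",
}

def classify_table(high_income, owns_home):
    return DECISION_TABLE[(high_income, owns_home)]

# Both representations encode identical logic.
for income in (True, False):
    for home in (True, False):
        assert classify_rules(income, home) == classify_table(income, home)
```

The table makes every condition combination explicit at a glance, which is one intuition behind the study's finding that users answer faster and more confidently with decision tables.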
52.
3D model alignment is an important step for applications such as 3D model retrieval and 3D model recognition. In this paper, we propose a novel Minimum Projection Area-based (MPA) alignment method for pose normalization. Our method finds three principal axes to align a model: the first principal axis gives the minimum projection area when the model is orthographically projected in the direction parallel to this axis; the second axis is perpendicular to the first and, among such directions, gives the minimum projection area; and the third axis is the cross product of the first two. We devise an optimization method based on Particle Swarm Optimization to efficiently find the axis with minimum projection area. For retrieval applications, we further perform axis ordering and orientation so that similar models are aligned in similar poses. We have tested MPA on several standard databases that include rigid/non-rigid and open/watertight models. Experimental results demonstrate that MPA performs well in finding alignment axes parallel to the ideal canonical coordinate frame of models and in aligning similar models in similar poses under varying conditions such as model variations, noise, and initial poses. In addition, it achieves better 3D model retrieval performance than several commonly used approaches such as CPCA, NPCA, and PCA.
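A minimal sketch of the core idea, under simplifying assumptions not taken from the paper: the projection area is approximated here by the bounding-box area of the projected points (the paper's exact area computation may differ), and a textbook PSO with standard inertia/attraction coefficients searches over the two spherical angles of the projection direction.

```python
import math
import random

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(a):
    n = math.sqrt(_dot(a, a))
    return tuple(x / n for x in a)

def projection_area(points, theta, phi):
    """Bounding-box area of the orthographic projection of `points`
    along the unit direction given by spherical angles (theta, phi)."""
    d = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    ref = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = _unit(_cross(d, ref))   # first in-plane axis
    v = _cross(d, u)            # second in-plane axis (already unit length)
    us = [_dot(p, u) for p in points]
    vs = [_dot(p, v) for p in points]
    return (max(us) - min(us)) * (max(vs) - min(vs))

def pso_min_projection(points, n_particles=30, iters=120, seed=0):
    """Standard PSO over (theta, phi), minimizing projection_area."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, math.pi), rng.uniform(0.0, 2.0 * math.pi)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [projection_area(points, *p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            f = projection_area(points, *pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For an elongated box, the direction minimizing the projection area is the long axis, so the swarm should converge toward it; the paper's first MPA axis is found in exactly this spirit.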
53.
Suppose a graph G is given with two vertex-disjoint sets of vertices Z1 and Z2. Can we partition the remaining vertices of G such that we obtain two connected vertex-disjoint subgraphs of G that contain Z1 and Z2, respectively? This problem is known as the 2-Disjoint Connected Subgraphs problem. It is already NP-complete for the class of n-vertex graphs G=(V,E) in which Z1 and Z2 each contain a connected set that dominates all vertices in V∖(Z1∪Z2). We present an O(1.2051^n) time algorithm that solves it for this graph class. As a consequence, we can also solve this problem in O(1.2051^n) time for the classes of n-vertex P6-free graphs and split graphs. This is an improvement upon a recent O(1.5790^n) time algorithm for these two classes. Our approach translates the problem to a generalized version of hypergraph 2-coloring and combines inclusion/exclusion with measure and conquer.
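The problem statement can be illustrated with a brute-force O(2^n) checker that tries every assignment of the remaining vertices to the two sides. This is only a naive baseline for small instances, not the paper's O(1.2051^n) algorithm.

```python
from itertools import product

def _connected(adj, verts):
    """Is the subgraph induced on `verts` non-empty and connected?"""
    verts = set(verts)
    if not verts:
        return False
    start = next(iter(verts))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in verts and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def two_disjoint_connected_subgraphs(adj, z1, z2):
    """Try all 2^|rest| ways to split the vertices outside z1 and z2.
    Returns a pair of vertex sets (A, B) with z1 ⊆ A, z2 ⊆ B, both
    inducing connected subgraphs, or None if no such partition exists."""
    rest = [v for v in adj if v not in z1 and v not in z2]
    for bits in product((0, 1), repeat=len(rest)):
        part1 = set(z1) | {v for v, side in zip(rest, bits) if side == 0}
        part2 = set(z2) | {v for v, side in zip(rest, bits) if side == 1}
        if _connected(adj, part1) and _connected(adj, part2):
            return part1, part2
    return None
```

On a path 0-1-2-3 with Z1={0} and Z2={3} a valid split exists, whereas on a path 0-1-2 with Z1={0,2} and Z2={1} none does, since 0 and 2 cannot be connected without vertex 1.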
54.
One of the most critical steps in the manufacturing process for FPD panels or shadow masks for CRTs is lithography. Most existing lithography technologies require high‐quality large‐area photomasks. The requirements on these photomasks include positioning accuracy (registration) and repeatability (overlay), systematic image‐quality errors (“mura” or display quality), and resolution (minimum feature size). The general trend toward higher resolution and improved performance, e.g., for TFT desktop monitors, has put a strong focus on the specifications for large‐area‐display photomasks. This article gives an overview of the dominant issues for large‐area‐display photomasks and illustrates differences compared with other applications. The article also presents state‐of‐the‐art methods and trends. In particular, the aspects of positioning accuracy over large areas and systematic image‐quality errors are described. New qualitative and objective methods have been developed to capture systematic image‐quality errors. Results indicating that errors below 25 nm can be found early in the manufacturing process are presented, thus allowing inspection for visual effects before the actual display is completed. Positioning accuracy below 400 nm (3 sigma) over 720 × 560 mm has been achieved. In the future, these results will be extended toward 1 × 1 m for generation‐4 TFT‐LCD production.
55.
We summarise our experiences from a number of demonstrators and simulation experiments designed to test the feasibility of using artificial decision-making agents in real-time domains, and comment on the significance of our results for the action patterns of autonomous artificial agents in markets. Our main hypothesis is that the use of norms can extend the capability of artificial decision makers beyond what is obtained by implementing individual utility maximizers in keeping with rational choice theory.
56.
The medical community is producing and manipulating a tremendous volume of digital data for which computerized archiving, processing, and analysis are needed. Grid infrastructures are promising for dealing with challenges arising in computerized medicine, but the manipulation of medical data on such infrastructures faces both the problem of interconnecting medical information systems with Grid middleware and that of preserving patients’ privacy in a wide, distributed multi-user system. These constraints often limit the use of Grids for manipulating sensitive medical data. This paper describes our design of a medical data management system that takes advantage of the advanced gLite data management services, developed in the context of the EGEE project, to fulfill the stringent needs of the medical community. It ensures medical data protection through strict data access control, anonymization, and encryption. The multi-level access control provides the flexibility needed for implementing complex medical use-cases. Data anonymization prevents the exposure of most sensitive data to unauthorized users, and data encryption guarantees data protection even when it is stored at remote sites. Moreover, the developed prototype provides a Grid storage resource manager (SRM) interface to standard medical DICOM servers, thereby enabling transparent access to medical data without interfering with medical practice.
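One ingredient, the anonymization of patient identifiers, can be sketched with a keyed hash: records stored on the Grid carry a pseudonym, and only holders of the key can link them back to a patient. This is a generic illustration, not the gLite/EGEE implementation; the key, field names, and identifier format below are invented for the example.

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, site_key: bytes) -> str:
    """Replace a patient identifier with a keyed hash (HMAC-SHA256).
    The same (id, key) pair always maps to the same pseudonym, so records
    of one patient stay linkable, while anyone without the key cannot
    recover or guess-and-verify the original identifier cheaply."""
    return hmac.new(site_key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key; in a real deployment it would never leave the hospital.
key = b"hospital-secret-key"

record = {
    "patient": pseudonymize("DOE^JOHN^19700101", key),
    "modality": "MR",   # non-identifying fields can remain in clear text
}
```

Encryption of the pixel data itself, the other protection layer the paper describes, would be applied on top of this before the file leaves the trusted site.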
57.
This paper describes experiences from implementing key parts of a compiler for Modelica, an object-oriented language supporting declarative modeling and simulation of complex physical systems. Our implementation uses the attribute-grammar-based tool JastAdd. In particular, we discuss the implementation of Modelica name analysis, which is highly context-dependent; type analysis, which is based on structural subtyping; and flattening, a fundamental part of the Modelica compilation process that involves the expansion of so-called modifications.
58.
When developing packaged software, which is sold ‘off-the-shelf’ in a worldwide marketplace, it is essential to collect needs and opportunities from different market segments and use this information in the prioritisation of requirements for the next software release. This paper presents an industrial case study in which a distributed prioritisation process is proposed, observed, and evaluated. The stakeholders in the requirements prioritisation process include marketing offices distributed around the world. A major objective of the distributed prioritisation is to gather and highlight the differences and similarities in the requirement priorities of the different market segments. The evaluation through questionnaires shows that the stakeholders found the process useful. The paper also presents novel approaches to visualising the priority distribution among stakeholders, together with measures of disagreement and satisfaction. Product management found the proposed charts valuable as decision support when selecting requirements for the next release, as they revealed unforeseen differences among stakeholder priorities. Conclusions on stakeholder tactics are provided and issues for further research are identified, including ways of addressing the identified challenges.
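The abstract does not define its disagreement and satisfaction measures, so the sketch below shows one plausible pair of definitions purely as an illustration: disagreement as the spread of per-requirement weights across stakeholders, and satisfaction as an L1-based agreement score between a stakeholder's weights and the final selection.

```python
from statistics import pstdev

def disagreement(priorities):
    """Per-requirement disagreement: the (population) standard deviation of
    the weight each stakeholder assigns to that requirement.
    `priorities` maps stakeholder -> {requirement: weight}, where each
    stakeholder's weights sum to 1."""
    reqs = next(iter(priorities.values()))
    return {r: pstdev([w[r] for w in priorities.values()]) for r in reqs}

def satisfaction(stakeholder_weights, final_weights):
    """1 minus half the L1 distance between a stakeholder's weights and the
    final weights: 1.0 means full agreement, 0.0 means none."""
    dist = sum(abs(stakeholder_weights[r] - final_weights[r]) for r in final_weights)
    return 1.0 - dist / 2.0
```

Measures of this shape make the case-study charts easy to reproduce: a high-disagreement requirement is exactly one where market segments pull in different directions, which is what product management wanted surfaced.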
59.
60.
In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes the placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high-throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared with the use of a single cloud only.
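A minimal illustration of cost-driven multi-cloud placement, with invented providers, prices, and capacities, using a greedy cheapest-first strategy. For identical VMs this greedy choice is cost-optimal, though it is far simpler than the brokering the paper evaluates.

```python
def place_vms(n_vms, offers):
    """Place n_vms identical VMs across clouds, cheapest provider first.
    `offers` is a list of (provider, price_per_vm_hour, capacity).
    Returns ({provider: vm_count}, total_hourly_cost)."""
    plan, cost, remaining = {}, 0.0, n_vms
    for provider, price, capacity in sorted(offers, key=lambda o: o[1]):
        take = min(capacity, remaining)
        if take > 0:
            plan[provider] = take
            cost += take * price
            remaining -= take
        if remaining == 0:
            break
    if remaining:
        raise ValueError("aggregate capacity is insufficient")
    return plan, cost
```

With offers [("cloud_a", 0.10, 5), ("cloud_b", 0.08, 3)] and six VMs, the broker fills cloud_b first and spills the rest to cloud_a, which is the kind of cross-provider split that makes multi-cloud deployment cheaper than any single cloud.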
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)