951.
The problem tackled in this article is that of associating perceived objects detected at a given time with known objects detected previously, given uncertain and imprecise information regarding the association of each perceived object with each known object. This problem arises, for instance, during the association step of an obstacle-tracking process, especially in the context of vehicle driving assistance. A contribution to the modeling of this association problem in the belief function framework is introduced. By interpreting belief functions as weighted opinions according to the Transferable Belief Model semantics, pieces of information regarding the association of known and perceived objects can be expressed in a common global association space, combined by the conjunctive rule of combination, and used for decision making via the pignistic transformation. The approach is validated on real data.
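As an illustration of the two TBM operations named above, here is a minimal Python sketch of the (unnormalized) conjunctive rule of combination and the pignistic transformation, applied to a hypothetical two-candidate association frame; the frame and mass values are illustrative, not taken from the paper.

```python
def conjunctive_combination(m1, m2):
    """TBM conjunctive rule: each intersection A∩B receives mass m1(A)*m2(B).
    Mass functions are dicts mapping frozenset focal elements to masses;
    conflict stays on the empty set (no normalization), as in the TBM."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            out[c] = out.get(c, 0.0) + wa * wb
    return out

def pignistic(m):
    """Pignistic transformation: BetP(x) = sum over focal sets A containing x
    of m(A) / (|A| * (1 - m(emptyset)))."""
    k = 1.0 - m.get(frozenset(), 0.0)
    bet = {}
    for a, w in m.items():
        for x in a:  # the empty set contributes nothing
            bet[x] = bet.get(x, 0.0) + w / (len(a) * k)
    return bet

# Hypothetical frame: a perceived object matches known object k1, k2, or neither (*).
frame = frozenset({"k1", "k2", "*"})
opinion1 = {frozenset({"k1"}): 0.6, frame: 0.4}
opinion2 = {frozenset({"k1", "k2"}): 0.5, frame: 0.5}
bet = pignistic(conjunctive_combination(opinion1, opinion2))
print(bet)
print(max(bet, key=bet.get))  # decide on the most probable association
```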
952.
We give polynomial-time, deterministic randomness extractors for sources generated in small space, where we model space-s sources on {0,1}^n as sources generated by width-2^s branching programs. Specifically, there is a constant η > 0 such that for any ζ > n^(−η), our algorithm extracts m = (δ − ζ)n bits that are exponentially close to uniform (in variation distance) from space-s sources with min-entropy δn, where s = Ω(ζ³n). Previously, nothing was known for δ ≪ 1/2, even for space 0. Our results are obtained by a reduction to the class of total-entropy independent sources. This model generalizes both the well-studied models of independent sources and symbol-fixing sources. These sources consist of a set of r independent smaller sources over {0,1}^ℓ, where the total min-entropy over all the smaller sources is k. We give deterministic extractors for such sources when k is as small as polylog(r), for small enough ℓ.
953.
Remotely sensed vegetation indices are widely used to detect greening and browning trends, in particular the global-coverage time series of normalized difference vegetation index (NDVI) data available from 1981 onward. Seasonality and serial autocorrelation in the data have previously been dealt with by aggregating the data to annual values; as an alternative to reducing the temporal resolution, we apply harmonic analyses and non-parametric trend tests to the GIMMS NDVI dataset (1981-2006). Using the complete dataset, greening and browning trends were analyzed with a linear model corrected for seasonality by subtracting the seasonal component, and with a seasonal non-parametric model. In a third approach, phenological shift and variation in the length of the growing season were accounted for by analyzing the time series using vegetation development stages rather than calendar days. Results differed substantially between the models, even though the input data were the same. Prominent regional greening trends identified by several other studies were confirmed, but the models were inconsistent in areas with weak trends. The linear model using data corrected for seasonality showed trend slopes similar to those described in previous work using linear models on yearly mean values. The non-parametric models demonstrated the significant influence of variations in phenology; accounting for these variations should yield more robust trend analyses and a better understanding of vegetation trends.
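A minimal sketch of the first approach described above — correcting a monthly NDVI series for seasonality with a single harmonic term and reading the trend off a linear model — using synthetic data; the GIMMS processing details are not reproduced here.

```python
import numpy as np

def harmonic_trend_slope(ndvi, period=12):
    """Fit NDVI(t) = a + b*t + c*cos(2*pi*t/period) + d*sin(2*pi*t/period)
    by ordinary least squares; b is the greening/browning trend once the
    seasonal (harmonic) component has been accounted for."""
    t = np.arange(len(ndvi), dtype=float)
    X = np.column_stack([
        np.ones_like(t),                 # intercept
        t,                               # linear trend
        np.cos(2 * np.pi * t / period),  # first harmonic (seasonality)
        np.sin(2 * np.pi * t / period),
    ])
    coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    return coef[1]  # trend slope per time step

# Synthetic monthly example: weak greening trend plus a seasonal cycle.
rng = np.random.default_rng(0)
t = np.arange(26 * 12)
series = (0.4 + 1e-4 * t + 0.2 * np.sin(2 * np.pi * t / 12)
          + 0.02 * rng.standard_normal(t.size))
print(harmonic_trend_slope(series))  # recovers approximately 1e-4
```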
954.
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified, three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined from fixed-wing aircraft, as the high-resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or the alternate reference class label. When agreement was defined as a match between the map class and the primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to another Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches.
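The two agreement definitions reduce to a simple computation over the reference sample; here is a sketch with hypothetical class labels (not NLCD data).

```python
def overall_accuracy(map_labels, primary_ref, alternate_ref=None):
    """Fraction of sample pixels whose map label matches the reference.
    If alternate_ref is given, a pixel counts as correct when the map label
    matches either the primary or the alternate reference label."""
    correct = 0
    for i, m in enumerate(map_labels):
        if m == primary_ref[i]:
            correct += 1
        elif alternate_ref is not None and m == alternate_ref[i]:
            correct += 1
    return correct / len(map_labels)

# Hypothetical 6-pixel sample with Level I class names.
map_px  = ["forest", "water", "shrub",  "forest", "wetland", "barren"]
primary = ["forest", "water", "forest", "shrub",  "wetland", "water"]
alt     = ["shrub",  "ice",   "shrub",  "forest", "herb",    "ice"]
print(overall_accuracy(map_px, primary))       # primary-only agreement: 0.5
print(overall_accuracy(map_px, primary, alt))  # primary-or-alternate: ~0.83
```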
955.
Schema integration aims to create a mediated schema as a unified representation of existing heterogeneous sources sharing a common application domain. These sources are increasingly written in XML due to its versatility and expressive power. Unfortunately, they often use different elements and structures to express the same concepts and relations, causing substantial semantic and structural conflicts. This challenge impedes the creation of high-quality mediated schemas and has not been adequately addressed by existing integration methods. In this paper, we propose a novel method, named XINTOR, for automating the integration of heterogeneous schemas. Given a set of XML sources and a set of correspondences between the source schemas, our method aims to create a complete and minimal mediated schema: it completely captures all of the concepts and relations in the sources without duplication, provided that the concepts do not overlap. Our contributions are fourfold. First, we resolve structural conflicts inherent in the source schemas. Second, we introduce a new statistics-based measure, called path cohesion, for selecting concepts and relations to be part of the mediated schema. Path cohesion is statistically computed from multiple path quality dimensions such as average path length and path frequency. Third, we resolve semantic conflicts by augmenting the semantics of similar concepts with context-dependent information. Finally, we propose a novel double-layered mediated schema to retain a wider range of concepts and relations than existing mediated schemas, which are at best either complete or minimal, but not both. Evaluated on both real and synthetic datasets, XINTOR outperforms existing methods with respect to (i) mediated-schema quality, measured using precision, recall, F-measure, and schema minimality; and (ii) execution performance, based on execution time and scale-up behavior.
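The abstract names the quality dimensions but not the path-cohesion formula itself, so the following is a hypothetical stand-in that merely combines the two named dimensions (path length and path frequency) into one score; it is not XINTOR's actual statistical measure.

```python
from collections import Counter

def path_cohesion(paths, alpha=0.5):
    """Illustrative path-cohesion-style score in [0, 1]: a weighted blend of
    path brevity (shorter element paths score higher) and path frequency
    (paths seen in more sources score higher). Hypothetical formula."""
    freq = Counter(tuple(p) for p in paths)
    max_f = max(freq.values())
    min_len = min(len(p) for p in freq)
    return {p: alpha * (min_len / len(p)) + (1 - alpha) * (f / max_f)
            for p, f in freq.items()}

# Hypothetical element paths extracted from XML source schemas.
paths = [
    ("library", "book", "title"),
    ("library", "book", "title"),
    ("library", "book", "author", "name"),
    ("catalog", "item", "title"),
]
for p, s in sorted(path_cohesion(paths).items(), key=lambda kv: -kv[1]):
    print(f"{s:.2f}  {'/'.join(p)}")
```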
956.
We introduce a fully automatic algorithm which optimizes the high-level structure of a given quadrilateral mesh to achieve a coarser quadrangular base complex. Such a topological optimization is highly desirable, since state-of-the-art quadrangulation techniques lead to meshes which have an appropriate singularity distribution and an anisotropic element alignment, but usually they are still far away from the high-level structure which is typical for carefully designed meshes manually created by specialists and used, e.g., in animation or simulation. In this paper we show that the quality of the high-level structure is negatively affected by helical configurations within the quadrilateral mesh. Consequently, we present an algorithm which detects helices and is able to remove most of them by applying a novel grid-preserving simplification operator (GP-operator) which is guaranteed to maintain an all-quadrilateral mesh. Additionally, it preserves the given singularity distribution and in particular does not introduce new singularities. For each helix we construct a directed graph in which cycles through the start vertex encode operations to remove the corresponding helix. A simple graph search algorithm can therefore be performed iteratively to remove as many helices as possible and thus improve the high-level structure in a greedy fashion. We demonstrate the usefulness of our automatic structure optimization technique on several examples of varying complexity.
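The helix-removal step reduces to finding directed cycles through a designated start vertex; below is a generic sketch of that graph search. The adjacency data is hypothetical, and the encoding of GP-operations on the edges is omitted.

```python
def cycle_through(graph, start):
    """Find one directed cycle passing through `start`, i.e. a path
    start -> ... -> start, via depth-first search. In the paper's setting
    such a cycle would encode a sequence of operations removing one helix."""
    stack = [(start, [start])]
    seen = set()
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt == start:
                return path + [start]      # closed a cycle through start
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None                            # no removing cycle exists

g = {"s": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["s"]}
print(cycle_through(g, "s"))  # e.g. ['s', 'b', 'c', 's']
```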
957.
This paper proposes a complete framework to assess the overall performance of classification models from a user perspective in terms of accuracy, comprehensibility, and justifiability. A review of accuracy and comprehensibility measures is provided, and a novel metric is introduced that allows one to measure the justifiability of classification models. Furthermore, a taxonomy of domain constraints is introduced, and an overview is presented of existing approaches for imposing constraints and including domain knowledge in data mining techniques. Finally, the justifiability metric is applied to a credit scoring case and a customer churn prediction case.
958.
The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, if P ≠ NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a (2k² + 2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k = 2, and for both problems when every A_ij is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.
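For intuition, here is a hedged sketch of the classic LP-scaling baseline for covering IPs — solve the relaxation, scale by k, round up — which motivates the knapsack-cover inequalities: plain scaling is a k-approximation only without the upper bounds d, and the clipping step below can break feasibility in general. This is background, not the paper's algorithm.

```python
import math
import numpy as np
from scipy.optimize import linprog

def cover_lp_scale_round(c, A, b, d, k):
    """Baseline for {min cx : Ax >= b, 0 <= x <= d} with at most k nonzeroes
    per row of A: solve the LP relaxation, multiply by k, round up. Clipping
    to the bounds d may violate coverage — exactly the difficulty the paper
    addresses via knapsack-cover inequalities."""
    # linprog minimizes c@x subject to A_ub@x <= b_ub, so negate Ax >= b.
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=list(zip([0] * len(c), d)))
    return np.array([min(di, math.ceil(k * xi - 1e-9))
                     for xi, di in zip(res.x, d)])

# Tiny hypothetical instance with at most k=2 nonzeroes per row.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [2.0, 0.0]])
b = np.array([1.0, 1.0])
d = [2.0, 2.0]
print(cover_lp_scale_round(c, A, b, d, k=2))  # [1 1], feasible here
```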
959.
Residential segregation is an inherently spatial phenomenon, as it measures the separation of different types of people within a region. Whether measured with an explicitly spatial index or a classic aspatial index, a region's underlying spatial properties can manifest themselves in the magnitude of measured segregation. In this paper we implement a Monte Carlo simulation approach to investigate the properties of four segregation indices in regions built with specific spatial properties. This approach allows us to control the experiment in ways that empirical data do not. In general, we confirm the expected results for the indices under various spatial properties, but some unexpected results emerge. Both the Dissimilarity Index and the Neighborhood Sorting Index are sensitive to region size, but their spatial counterparts, the Adjusted Dissimilarity Index and the Generalized Neighborhood Sorting Index, are generally immune to this problem. The paper also lends weight to concerns about the downward pressure on measured segregation when multiple neighborhoods are grouped into a single census tract. Finally, we discuss concerns about the way space is incorporated into segregation indices, since the expected value of the spatial indices tested is lower than that of their aspatial counterparts.
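A minimal version of the simulation idea, using the classic Dissimilarity Index D = ½ Σᵢ |aᵢ/A − bᵢ/B|: assigning individuals to neighborhoods uniformly at random gives the random-unevenness baseline against which size sensitivity shows up. The populations and neighborhood counts below are hypothetical, and this is only one of the four indices the paper tests.

```python
import random

def dissimilarity_index(group_a, group_b):
    """Aspatial Dissimilarity Index over neighborhoods i:
    D = 0.5 * sum_i |a_i/A - b_i/B|, with A and B the regional totals."""
    A, B = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / A - b / B) for a, b in zip(group_a, group_b))

def monte_carlo_D(total_a, total_b, n_hoods, n_sims=1000, seed=0):
    """Expected D when individuals are placed uniformly at random: with no
    systematic sorting, any nonzero D reflects random unevenness alone."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        a, b = [0] * n_hoods, [0] * n_hoods
        for _ in range(total_a):
            a[rng.randrange(n_hoods)] += 1
        for _ in range(total_b):
            b[rng.randrange(n_hoods)] += 1
        sims.append(dissimilarity_index(a, b))
    return sum(sims) / n_sims

# Smaller neighborhoods inflate the random baseline, illustrating the
# size sensitivity of D reported above.
print(monte_carlo_D(500, 500, n_hoods=10))
print(monte_carlo_D(500, 500, n_hoods=50))
```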
960.
Players of epistemic games – computer games that simulate professional practica – have been shown to develop epistemic frames: a profession's particular way of seeing and solving problems. This study examined the interactions between players and mentors in one epistemic game, Urban Science. Using a new method called epistemic network analysis, we explored how players develop epistemic frames through playing the game. Our results show that players imitate and internalize the professional way of thinking that the mentors model, suggesting that mentors can effectively model epistemic frames, and that epistemic network analysis is a useful way to chart the development of learning through mentoring relationships.
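Epistemic network analysis starts from co-occurrences of coded frame elements within discourse segments; here is a minimal sketch of that counting step with hypothetical codes. The full ENA method involves normalization and dimensional reduction not shown here.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(stanzas):
    """Each stanza (a segment of player-mentor discourse) is a set of coded
    frame elements; edge weights count how often two elements co-occur
    within a stanza."""
    edges = defaultdict(int)
    for codes in stanzas:
        for u, v in combinations(sorted(set(codes)), 2):
            edges[(u, v)] += 1
    return dict(edges)

# Hypothetical coded excerpts from Urban Science player-mentor chat.
stanzas = [
    {"skill.planning", "value.stakeholders"},
    {"skill.planning", "knowledge.zoning", "value.stakeholders"},
    {"knowledge.zoning", "identity.planner"},
]
for (u, v), w in cooccurrence_network(stanzas).items():
    print(f"{u} -- {v}: {w}")
```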