91.
Benchmarking attribute selection techniques for discrete class data mining   (total citations: 9; self-citations: 0; citations by others: 9)
Data engineering is generally considered to be a central issue in the development of data mining applications. The success of many learning schemes, in their attempts to construct models of data, hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant, and noisy attributes in the model building process can result in poor predictive performance and increased computation. Attribute selection generally involves a combination of search and attribute utility estimation, plus evaluation with respect to specific learning schemes. This leads to a large number of possible permutations and has led to a situation where very few benchmark studies have been conducted. This paper presents a benchmark comparison of several attribute selection methods for supervised classification. All the methods produce an attribute ranking, a useful device for isolating the individual merit of an attribute. Attribute selection is achieved by cross-validating the attribute rankings with respect to a classification learner to find the best attributes. Results are reported for a selection of standard data sets and two diverse learning schemes, C4.5 and naive Bayes.
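As a rough illustration of the ranking-plus-cross-validation scheme described above, the following sketch ranks attributes by mutual information and then cross-validates a naive Bayes learner on growing subsets of the ranked attributes. The scikit-learn calls, the mutual-information criterion, and the example data set are illustrative assumptions, not the benchmark setup used in the paper.

```python
# Minimal sketch of ranking-based attribute selection with cross-validation.
# The ranking criterion (mutual information) and the data set are
# illustrative choices, not the paper's exact experimental setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

# 1) Rank attributes by an information-based merit score.
merit = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(merit)[::-1]          # best attribute first

# 2) Cross-validate the learner on the top-k attributes for each k.
scores = []
for k in range(1, X.shape[1] + 1):
    subset = ranking[:k]
    acc = cross_val_score(GaussianNB(), X[:, subset], y, cv=10).mean()
    scores.append(acc)

best_k = int(np.argmax(scores)) + 1
print(f"best subset size: {best_k}, CV accuracy: {scores[best_k - 1]:.3f}")
```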
92.
The historical record of in situ measurements of the terminus positions of the Pasterze and Kleines Fleißkees glaciers in the eastern Alps of Austria is used to assess uncertainties in the measurement of decadal scale changes using satellite data. Topographic maps beginning in 1893, and satellite data from 1976 to 2001, were studied in concert with ground measurements to quantify glacier changes. Ground measurements show that the tongue of the Pasterze Glacier receded ∼1150 m from 1893 to 2001, while satellite-derived measurements, using August 2001 Landsat Enhanced Thematic Mapper Plus (ETM+) data registered to an 1893 topographic map, show a recession of 1300-1800 m, with an unknown error. The measurement accuracy depends on the registration technique and the pixel resolution of the sensor when two satellite images are used. When topographic maps are used, an additional source of error is the accuracy of the glacier position shown on the map. Between 1976 and 2001, Landsat-derived measurements show a recession of the terminus of the Pasterze Glacier of 479±136 m (at an average rate of 19.1 m a−1), while measurements from the ground showed a recession of 428 m (at an average rate of 17.1 m a−1). Four-meter resolution Ikonos satellite images from 2000 and 2001 reveal a shrinkage of 22,096±46 m² in the Pasterze tongue. The nearby Kleines Fleißkees glacier lost 30% of its area between 1984 and 2001, and the area of exposed ice increased by 0.44±0.0023 km², according to Landsat satellite measurements. As more recent satellite images are used, especially data that are geocoded, the uncertainty associated with measuring glacier changes has decreased. It is not possible to assess the uncertainty when an old topographic map and a satellite image are coregistered.
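A quick worked check of the average recession rates quoted above, treating the 1976-2001 interval as 25 years (an assumption; the exact survey dates are not given here):

```python
# Worked check of the average recession rates quoted in the abstract.
satellite_recession_m = 479.0
ground_recession_m = 428.0
years = 2001 - 1976                        # assumed 25-year interval

print(satellite_recession_m / years)       # ~19.2 m per year (abstract: 19.1 m/a)
print(ground_recession_m / years)          # ~17.1 m per year (abstract: 17.1 m/a)
```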
93.
Lot streaming involves splitting a production lot into a number of sublots, in order to allow the overlapping of successive operations in multi-machine manufacturing systems. In no-wait flowshop scheduling, sublots are necessarily consistent, that is, they remain the same over all machines. The benefits of lot streaming include reductions in lead times and work-in-process, and increases in machine utilization rates. We study the problem of minimizing the makespan in no-wait flowshops producing multiple products with attached setup times, using lot streaming. Our study of the single product problem resolves an open question from the lot streaming literature. The intractable multiple product problem requires finding the optimal number of sublots, sublot sizes, and a product sequence for each machine. We develop a dynamic programming algorithm to generate all the nondominated schedule profiles for each product that are required to formulate the flowshop problem as a generalized traveling salesman problem. This problem is equivalent to a classical traveling salesman problem with a pseudopolynomial number of cities. We develop and computationally test an efficient heuristic for this problem. Our results indicate that solutions can quickly be found for flowshops with up to 10 machines and 50 products. Moreover, the solutions found by our heuristic provide a substantial improvement over previously published results.
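For orientation, the sketch below computes the makespan of a fixed product sequence in a no-wait flowshop from per-machine processing times; the no-wait constraint reduces to a fixed start-time offset between consecutive jobs. This is a structural illustration only and deliberately omits the sublot sizing, attached setup times, and the dynamic programming and heuristic procedures studied in the paper; the data are made up.

```python
# Makespan of a fixed job sequence in a no-wait flowshop.
# proc[j][m] = processing time of job j on machine m.
# Structural illustration only; sublots and attached setups are omitted.
def no_wait_makespan(proc, sequence):
    def offset(i, j):
        # Minimum gap between the start times of consecutive jobs i then j
        # so that j never waits between machines.
        m = len(proc[i])
        best = 0.0
        for k in range(m):
            gap = sum(proc[i][:k + 1]) - sum(proc[j][:k])
            best = max(best, gap)
        return best

    start = 0.0
    for a, b in zip(sequence, sequence[1:]):
        start += offset(a, b)
    last = sequence[-1]
    return start + sum(proc[last])

proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]   # 3 jobs x 3 machines (made-up data)
print(no_wait_makespan(proc, [0, 1, 2]))   # 14 for this data
```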
94.
A new interconnection network is proposed for the construction of a massively parallel computer system. The systematic construction of this interconnection network, denoted RCC-FULL, is performed by methodically connecting together a number of basic atoms, where a basic atom is a set of fully interconnected nodes. Key communication characteristics are derived and evaluated for RCC-FULL, and efficient routing algorithms, which need only local information to route messages between any two nodes, are also derived. An O(log N) sorting algorithm is shown for RCC-FULL, and RCC-FULL is shown to emulate deterministically the CRCW PRAM model with only O(log N) degradation in time performance. Finally, the hardware cost of RCC-FULL is estimated as a function of its pin requirements and compared to that of the binary hypercube; most instances of RCC-FULL have substantially lower cost. Hence, RCC-FULL appears to be a particularly effective network for PRAM emulation, and might be considered as a universal network for future supercomputing systems.
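The abstract specifies only that a basic atom is a set of fully interconnected nodes; the hedged sketch below builds the edge set of such an atom and leaves the inter-atom wiring, which is the substance of the RCC-FULL construction, unspecified.

```python
# Illustrative sketch only: the adjacency of one "basic atom", i.e. a set of
# fully interconnected nodes. The rule RCC-FULL uses to wire atoms together
# is not reproduced here.
from itertools import combinations

def basic_atom(node_ids):
    """Return the edge set of a fully interconnected (complete) atom."""
    return set(combinations(sorted(node_ids), 2))

# Two disjoint atoms of four nodes each; inter-atom links would be added
# according to the RCC-FULL construction rules (not given in the abstract).
atom_a = basic_atom(range(0, 4))
atom_b = basic_atom(range(4, 8))
print(len(atom_a), len(atom_b))   # a complete atom of n nodes has n*(n-1)/2 links
```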
95.
A parallel, unstructured, high-order discontinuous Galerkin method is developed for the time-dependent Maxwell's equations, using simple monomial polynomials for spatial discretization and a fourth-order Runge–Kutta scheme for time marching. Scattering results for a number of validation cases are computed employing polynomials of up to third order. Accurate solutions are obtained on coarse meshes and grid convergence is achieved, demonstrating the capabilities of the scheme for time-domain electromagnetic wave scattering simulations.
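The time-marching component named in the abstract, the classical fourth-order Runge–Kutta scheme, can be sketched generically for a semi-discrete system du/dt = L(u); the spatial operator below is a simple stand-in, not the discontinuous Galerkin Maxwell discretization.

```python
# Classical fourth-order Runge-Kutta step for a semi-discrete system
# du/dt = L(u). The operator L below is a placeholder, not the DG Maxwell
# discretization from the paper.
import numpy as np

def rk4_step(L, u, dt):
    k1 = L(u)
    k2 = L(u + 0.5 * dt * k1)
    k3 = L(u + 0.5 * dt * k2)
    k4 = L(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: linear advection-like stand-in operator on a periodic grid.
n = 64
dx = 1.0 / n
L = lambda u: -(np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central difference
u = np.sin(2 * np.pi * np.arange(n) * dx)
for _ in range(100):
    u = rk4_step(L, u, dt=0.5 * dx)
```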
96.
This paper represents a first attempt at a systematic study of sensitivity analysis for scheduling problems. Because schedules contain both combinatorial and temporal structures, scheduling problems present unique issues for sensitivity analysis. Some of the issues that we discuss have not been considered before. Others, while studied before, have not been explored in the context of scheduling. The applicability of these issues is illustrated using well-known scheduling models. We provide fast methods to determine when a previously optimal schedule remains optimal. Other methods restore an optimal schedule after a parameter change. The value of studying the sensitivity of an optimal sequence instead of the sensitivity of an optimal schedule is demonstrated. We show that, for some problems, sensitivity analysis results depend on the positions of jobs with changed parameters. We identify scheduling problems where performing additional or different computations during optimization facilitates sensitivity analysis. To improve the robustness of an optimal schedule, selection among multiple optimal schedules is considered. We discuss which types of sensitivity analysis questions are intractable because the scheduling problem itself is intractable. We also study how heuristic error bounds vary when the data of a scheduling problem are continuously modified. Although we focus on scheduling problems, several of the issues we discuss and our classification scheme can be extended to other optimization problems.
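One textbook instance of the kind of question raised above: for the single-machine problem of minimizing total weighted completion time, a schedule is optimal exactly when jobs follow the weighted-shortest-processing-time (WSPT) order, so a previously optimal sequence survives a parameter change only if that order is preserved. The check below illustrates this idea; it is not a procedure taken from the paper, and the job data are invented.

```python
# Fast optimality-retention check for 1 || sum(w_j C_j): a sequence is
# optimal iff jobs appear in nondecreasing order of p_j / w_j (WSPT rule).
# A textbook illustration, not a method from the paper.
def still_optimal(sequence, p, w):
    ratios = [p[j] / w[j] for j in sequence]
    return all(a <= b + 1e-12 for a, b in zip(ratios, ratios[1:]))

p = {0: 2.0, 1: 3.0, 2: 5.0}                 # processing times (invented)
w = {0: 1.0, 1: 2.0, 2: 4.0}                 # weights (invented)
seq = sorted(p, key=lambda j: p[j] / w[j])   # optimal before the change
print(still_optimal(seq, p, w))              # True

p[1] = 7.0                                   # processing time of job 1 grows
print(still_optimal(seq, p, w))              # False: old sequence no longer WSPT-ordered
```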
97.
Starch, being a transparent crystal, often gives images that are difficult to define precisely with the light microscope, owing to diffraction and other effects such as internal structure, which may appear as a surface phenomenon. The scanning electron microscope (SEM), however, gives only surface detail. In an effort to differentiate between surface and internal details, the same starch granules have been studied by both ordinary light and scanning electron microscopy. In each case the granules were held, when studied by SEM, in the same configuration as was seen with the light microscope. In this way a direct comparison could be made between granules viewed by each microscopy technique. From such comparisons it is possible to determine the starch details that are actually due to internal features. The results for canna, potato and corn starches are given.
98.
Severe burn injury leads to a cascade of local and systemic immune responses that trigger an extreme state of immune dysfunction, leaving the patient highly susceptible to acute and chronic infection. When burn is combined with inhalation injury, patients have higher mortality and a greater chance of developing secondary respiratory complications, including infection. No animal model of combined burn and inhalation injury (B+I) exists that accurately mirrors the human clinical picture, nor are there any effective immunotherapies or predictive models of the risk of immune dysfunction. Our earlier work showed that the mechanistic/mammalian target of rapamycin (mTOR) pathway is activated early after burn injury, and its chemical blockade at injury reduced subsequent chronic bacterial susceptibility. It is unclear if mTOR plays a role in the exacerbated immune dysfunction seen after B+I injury. We aimed to: (1) characterize a novel murine model of B+I injury, and (2) investigate the role of mTOR in the immune response after B+I injury. Pulmonary and systemic immune responses to B+I were characterized in the absence or presence of mTOR inhibition at the time of injury. The data describe a murine model of B+I with inhalation-specific immune phenotypes and implicate mTOR in the acute immune dysfunction observed.
99.
Fast accurate fuzzy clustering through data reduction   (total citations: 11; self-citations: 0; citations by others: 11)
Clustering is a useful approach in image segmentation, data mining, and other pattern recognition problems for which unlabeled data exist. Fuzzy clustering using fuzzy c-means or variants of it can provide a data partition that is both better and more meaningful than hard clustering approaches. The clustering process can be quite slow when there are many objects or patterns to be clustered. This paper discusses the algorithm brFCM, which is able to reduce the number of distinct patterns that must be clustered without adversely affecting the partition quality. The reduction is done by aggregating similar examples and then using a weighted exemplar in the clustering process. The reduction in the amount of clustering data allows a partition of the data to be produced faster. The algorithm is applied to the problem of segmenting 32 magnetic resonance images into different tissue types and the problem of segmenting 172 infrared images into trees, grass, and target. Average speed-ups of 59-290 times over a traditional implementation of fuzzy c-means were obtained using brFCM, while producing partitions that are equivalent to those produced by fuzzy c-means.
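The core of a brFCM-style reduction can be sketched as weighted fuzzy c-means over aggregated exemplars: near-duplicate patterns collapse to one exemplar whose count weights the centroid update. In the sketch below the aggregation is simple rounding and the parameters are illustrative assumptions, not the paper's exact reduction or data.

```python
# Sketch of weighted fuzzy c-means on aggregated exemplars. The binning rule
# and parameters are illustrative stand-ins, not the brFCM procedure itself.
import numpy as np

def reduce_data(X, decimals=1):
    """Aggregate near-duplicate rows into (exemplar, count) pairs."""
    keys, counts = np.unique(np.round(X, decimals), axis=0, return_counts=True)
    return keys.astype(float), counts.astype(float)

def weighted_fcm(X, w, c=2, m=2.0, iters=100, eps=1e-6):
    rng = np.random.default_rng(0)
    V = X[rng.choice(len(X), c, replace=False)]          # initial centroids
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update on the exemplars.
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        Um = (U ** m) * w[:, None]                        # exemplar counts as weights
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        converged = np.linalg.norm(V_new - V) < eps
        V = V_new
        if converged:
            break
    return V, U

X = np.vstack([np.random.default_rng(1).normal(0, 0.2, (200, 2)),
               np.random.default_rng(2).normal(2, 0.2, (200, 2))])
exemplars, counts = reduce_data(X)
centroids, memberships = weighted_fcm(exemplars, counts)
print(len(X), "patterns reduced to", len(exemplars), "exemplars")
print(centroids)
```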
100.
Hypermedia has for some time now been proposed as an adjunct to printed material within the educational process. However, creating a highly interconnected hypermedia network is complex and time-consuming, with overviews of the content and structure of the information seemingly essential in order to avoid the disorientation and cognitive overload problems often described.

This paper describes an environment designed to remove much of the burden of creating such support facilities, allowing the teacher to concentrate on the content and structure of the information presented.