171.
Photonic networks based on the optical path concept and wavelength-division multiplexing (WDM) technology require unique operation, administration, and maintenance (OAM) functions. In order to realize the required OAM functions, the optical path network must support an effective management-information transfer method. The method that superimposes a pilot tone on the optical signal is very attractive for optical path overhead transfer. The pilot tone transmission capacity is determined by the carrier-to-noise ratio, which depends on the power spectral density of the optical signal. The pilot tone transmission capacity of an optical path network employing WDM technology is elucidated; 4.5 kb/s transmission can be realized when the pilot tone modulation index is set at 3%.
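The relationship the abstract describes — tone power set by the modulation index, noise set by the optical signal's power spectral density over the tone bandwidth — can be sketched as a Shannon-style upper bound. All numeric defaults below (signal power, PSD, bandwidth) are hypothetical illustration values, not the paper's system parameters:

```python
import math

def pilot_tone_capacity(m, p_signal_mw=1.0, psd_mw_per_hz=1e-9, bandwidth_hz=3e3):
    """Shannon-style upper bound on the pilot-tone bit rate.

    m: modulation index (e.g. 0.03 for 3%). The optical signal's power
    spectral density acts as the noise floor seen by the pilot tone;
    every numeric default here is hypothetical.
    """
    pilot_power = 0.5 * (m * p_signal_mw) ** 2   # tone power for a small index
    noise_power = psd_mw_per_hz * bandwidth_hz   # signal PSD integrated over the tone band
    cnr = pilot_power / noise_power
    return bandwidth_hz * math.log2(1.0 + cnr)   # bits per second
```

The sketch reproduces the qualitative point: a larger modulation index raises the carrier-to-noise ratio and hence the achievable overhead bit rate.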
173.
In order to observe the transport ability of the peritoneum for small-molecular substances, the peritoneal equilibration test (PET) was performed in 52 CAPD patients. By analysing the relationship between peritoneal transport function and dialysis adequacy, we found that the average urea KT/V and Cr were significantly lower in the high and low transport groups (n = 6 and n = 2) than in the high-average and low-average groups (n = 35 and n = 9). According to the results of the PET, we adjusted the dialysis program of 11 patients and the dialysis adequacy was markedly improved. We concluded that the PET is helpful for selecting and adjusting the CAPD program, and we discuss some points that deserve particular attention in PET operation.
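The adequacy measure the study adjusts against, weekly urea Kt/V, can be computed from routine exchange data: each exchange clears a plasma-equivalent volume equal to drain volume times the dialysate-to-plasma urea ratio. The function below is a generic illustration of that formula (not the paper's own protocol), with V taken as a given urea distribution volume:

```python
def weekly_urea_ktv(drain_volumes_l, dialysate_urea, plasma_urea, v_urea_l):
    """Weekly urea Kt/V for CAPD (illustrative formula, not the paper's protocol).

    One day's clearance is the urea mass removed in the dialysate divided by
    the plasma urea concentration; dividing by the urea distribution volume V
    and scaling to a week gives the weekly Kt/V.
    """
    cleared_l = sum(v * (d / plasma_urea)
                    for v, d in zip(drain_volumes_l, dialysate_urea))
    return 7.0 * cleared_l / v_urea_l
```

For example, four 2.2 L exchanges per day at a dialysate/plasma urea ratio of 0.9 and V = 38.5 L give a weekly Kt/V of about 1.44.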
174.
A premise of cardiac risk stratification is that the added risk of coronary artery bypass grafting (CABG) is offset by the improved safety of subsequent vascular reconstruction (VR). We questioned whether elective CABG in patients with severe peripheral vascular disease (PVD) is a relatively high-risk procedure. A cohort study of 680 elective CABG patients from January 1993 to December 1994 was performed using three mutually exclusive outcomes: complication-free survival, morbidity, and mortality. Patient-characteristic, operative, and outcome data were prospectively collected. Retrospective review determined that 58 patients had either a standard indication for or a history of VR. Overall CABG mortality was 2.5%, with statistically similar but relatively higher rates for PVD as compared to non-PVD patients. In contrast, major morbidity occurred at rates 3.6-fold higher in PVD patients (39.7%) than in disease-free patients (16.7%) after adjustment for the effects of patient and operative variables (odds ratio [OR] 3.67, 95% confidence interval [CI] 1.93-6.99). CABG morbidity in the PVD patient was most likely in those patients with aortoiliac (OR 9.51, CI 3.20-28.27) and aortic aneurysmal (OR 5.24, CI 1.28-21.41) disease types. CABG in PVD patients is associated with significant major morbidity. Such morbidity may preclude or alter the timing of subsequent VR.
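The odds ratios with Wald confidence intervals reported here follow the standard 2x2-table computation. The sketch below shows the crude (unadjusted) version; the event counts in the example are reconstructed from the quoted percentages for illustration, whereas the paper's OR 3.67 is a multivariable-adjusted estimate:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table.

    a/b: events/non-events in the exposed group,
    c/d: events/non-events in the unexposed group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

With roughly 23/35 morbid/non-morbid PVD patients against 104/518 disease-free patients, the crude OR comes out near 3.3, close to the adjusted 3.67 reported.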
176.
Prolog/Rex represents a powerful amalgamation of the latest techniques for knowledge representation and processing, rich in semantic features that ease the difficult task of encoding the heterogeneous knowledge of real-world applications. The Prolog/Rex concept mechanism lets a user represent domain entities in terms of their structural and behavioral properties, including multiple inheritance, arbitrary user-defined relations among entities, annotated values (demons), incomplete knowledge, etc. A flexible rule language helps the knowledge engineer capture human expertise and provides flexible control of the reasoning process. An additional strength of Prolog/Rex, not found in any other hybrid language built on top of Prolog, is language-level support for keeping many potentially contradictory solutions to a problem alive, allowing possible solutions and their implications to be automatically generated and completely explored before any one of them is committed. The same mechanism is used to model time-states, which are useful in planning and scheduling applications of Prolog/Rex.
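The idea of keeping contradictory candidate solutions alive until they are fully explored can be illustrated with a toy "multiple worlds" enumerator. This is only a conceptual analogue of the Prolog/Rex mechanism, written in Python with invented names, not its actual API:

```python
from itertools import product

def explore_worlds(choice_points, constraints):
    """Keep every candidate 'world' (one value per choice point) alive,
    then prune only those violating a constraint -- a toy analogue of
    Prolog/Rex's multiple-solution mechanism.
    """
    names = list(choice_points)
    worlds = [dict(zip(names, values))
              for values in product(*(choice_points[n] for n in names))]
    return [w for w in worlds if all(c(w) for c in constraints)]
```

In a scheduling setting, each choice point might be a task's time slot, with constraints ruling out clashes; all consistent schedules survive and can be compared before one is committed.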
177.
It is well known that the effectiveness of relational database systems depends greatly on the efficiency of the data access strategies. For this reason, much work has been devoted to the development of new access techniques, supported by adequate access structures such as the B+tree. The effectiveness of the B+tree also depends on the data distribution characteristics; in particular, performance is poor when the data show strongly unbalanced key-value distributions. The aim of this paper is to present the partial index: a new access structure that is useful in such cases of unbalancing, as an alternative to unclustered B+tree indexes. The access structures are built in the physical design phase, and at execution (or compilation) time the optimizer chooses the most efficient access path. Thus, integration of the partial indexing technique into the design and optimization processes is also described.
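The core idea — index only the rows for which an index actually pays off, and let the few very frequent ("heavy") key values fall back to a scan — can be sketched in a few lines. The class and threshold below are illustrative, not the paper's design:

```python
from collections import Counter, defaultdict

class PartialIndex:
    """Index only the selective key values of a skewed column.

    Rows holding one of the few very frequent values are left to a
    sequential scan, since an unclustered index buys little there;
    rare values get ordinary index entries.
    """
    def __init__(self, rows, key, max_freq=0.1):
        freq = Counter(r[key] for r in rows)
        cutoff = max_freq * len(rows)
        self.key, self.rows = key, rows
        self.heavy = {v for v, n in freq.items() if n > cutoff}
        self.index = defaultdict(list)
        for i, r in enumerate(rows):
            if r[key] not in self.heavy:
                self.index[r[key]].append(i)

    def lookup(self, value):
        if value in self.heavy:          # non-selective value: full scan
            return [i for i, r in enumerate(self.rows) if r[self.key] == value]
        return self.index.get(value, []) # selective value: index probe
```

At optimization time a real system would make the same choice per predicate value: probe the partial index when the value is indexed, otherwise scan.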
178.
Efficient algorithms for processing large volumes of data are very important for both relational and newer object-oriented database systems. Many query-processing operations can be implemented using sort- or hash-based algorithms, e.g., intersection, join, and duplicate elimination. In the early relational database systems, only sort-based algorithms were employed. In the last decade, hash-based algorithms have gained acceptance and popularity, and are often considered generally superior to sort-based algorithms such as merge-join. In this article, we compare the concepts behind sort- and hash-based query-processing algorithms and conclude that (1) many dualities exist between the two types of algorithms, (2) their costs differ mostly by percentages rather than by factors, (3) several special cases exist that favor one or the other choice, and (4) there is a strong reason why both hash- and sort-based algorithms should be available in a query-processing system. Our conclusions are supported by experiments performed using the Volcano query execution engine.
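The sort/hash duality is easy to see on the join operator itself: the hash variant builds a table on one input and probes with the other, while the sort variant orders both inputs and merges matching key runs. A minimal sketch of both (toy in-memory versions, ignoring the partitioning and external-memory machinery of real engines):

```python
from collections import defaultdict

def hash_join(r, s, key):
    """Build a hash table on r, probe it with each row of s."""
    table = defaultdict(list)
    for row in r:
        table[row[key]].append(row)
    return [(a, b) for b in s for a in table.get(b[key], [])]

def merge_join(r, s, key):
    """Sort both inputs, then merge matching key runs."""
    r = sorted(r, key=lambda x: x[key])
    s = sorted(s, key=lambda x: x[key])
    out, i = [], 0
    for b in s:
        while i < len(r) and r[i][key] < b[key]:
            i += 1                      # skip r rows below b's key
        j = i
        while j < len(r) and r[j][key] == b[key]:
            out.append((r[j], b))       # emit the matching run
            j += 1
    return out
```

Both produce the same set of joined pairs; which is cheaper depends on input sizes, memory, and whether sorted output is useful downstream — exactly the kind of special case the article catalogues.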
179.
We propose and evaluate a parallel "decomposite best-first" search branch-and-bound algorithm (dbs) for MIN-based multiprocessor systems. We start with a new probabilistic model to estimate the number of evaluated nodes for a serial best-first search branch-and-bound algorithm. This analysis is used in predicting the parallel algorithm's speedup. The proposed algorithm initially decomposes a problem into N subproblems, where N is the number of processors available in the multiprocessor. Afterwards, each processor executes a serial best-first search to find a local feasible solution. Local solutions are broadcast through the network to compute the final solution. A conflict-free mapping scheme, known as the step-by-step spread, is used for subproblem distribution on the MIN. A speedup expression for the parallel algorithm is then derived using the serial best-first search node-evaluation model. Our analysis considers both computation and communication overheads to provide a realistic speedup estimate. The communication modeling is also extended to the parallel global best-first search technique. All the analytical results are validated via simulation. For large systems, when communication overhead is taken into consideration, the parallel decomposite best-first search algorithm is observed to provide better speedup than other reported schemes.
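The decomposition step can be sketched on a concrete problem. Below, a 0/1 knapsack instance is split into subproblems by fixing the first few decision variables in every combination (one subproblem per simulated processor), each subproblem is solved by a serial best-first branch-and-bound, and the local optima are combined — a sequential illustration of the scheme's structure, with the MIN communication and step-by-step spread mapping omitted:

```python
import heapq
from itertools import product

def best_first_knap(values, weights, cap, fixed):
    """Serial best-first branch-and-bound on a 0/1 knapsack whose first
    len(fixed) items are already decided -- one subproblem of the
    decomposition. Bound: fractional relaxation of the remaining items.
    """
    n = len(values)
    def bound(i, val, wt):
        for j in range(i, n):
            if wt + weights[j] <= cap:
                wt += weights[j]; val += values[j]
            else:
                return val + values[j] * (cap - wt) / weights[j]
        return val
    start_v = sum(v for v, f in zip(values, fixed) if f)
    start_w = sum(w for w, f in zip(weights, fixed) if f)
    if start_w > cap:
        return 0                        # infeasible subproblem
    best = 0
    heap = [(-bound(len(fixed), start_v, start_w), len(fixed), start_v, start_w)]
    while heap:
        nb, i, val, wt = heapq.heappop(heap)
        if -nb <= best:
            break                       # best-first: no remaining node can improve
        if i == n:
            best = max(best, val)
            continue
        for take in (1, 0):
            w2 = wt + weights[i] * take
            if w2 <= cap:
                v2 = val + values[i] * take
                heapq.heappush(heap, (-bound(i + 1, v2, w2), i + 1, v2, w2))
    return best

def decomposite_bb(values, weights, cap, n_proc=2):
    """Fix the first log2(n_proc) items in all combinations to form one
    subproblem per (simulated) processor; solve each serially, take the max.
    """
    k = max(1, n_proc.bit_length() - 1)
    return max(best_first_knap(values, weights, cap, list(f))
               for f in product((0, 1), repeat=k))
```

In the real algorithm the per-processor searches run concurrently and the final `max` is computed by broadcasting local solutions through the MIN.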
180.
A constant-time algorithm for labeling the connected components of an N×N image on a reconfigurable network of N^3 processors is presented. The main contribution of the algorithm is a novel constant-time technique for determining the minimum-labeled PE in each component. The number of processors used by the algorithm can be reduced to N^(2+1/d), for any 1 ≤ d ≤ log N, if O(d) time is allowed.
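What the algorithm computes — each component labeled by the smallest PE index it contains — can be checked against a simple serial reference. The union-find sketch below always keeps the minimum index as a component's root; it is a sequential stand-in for the constant-time reconfigurable-mesh technique, not the paper's algorithm:

```python
def label_min(image):
    """Label each 1-pixel of a binary N x N image with the smallest
    row-major index in its 4-connected component.
    """
    n = len(image)
    parent = list(range(n * n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # min index stays the root
    for i in range(n):
        for j in range(n):
            if image[i][j]:
                if j + 1 < n and image[i][j + 1]:
                    union(i * n + j, i * n + j + 1)
                if i + 1 < n and image[i + 1][j]:
                    union(i * n + j, (i + 1) * n + j)
    return [[find(i * n + j) if image[i][j] else None for j in range(n)]
            for i in range(n)]
```

Since unions always attach the larger root to the smaller, the representative of each component is, by induction, its minimum row-major index — the same labels the mesh algorithm produces in constant time.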