71.
Trisomy 21 develops as a result of nondisjunction of two homologous chromosomes during either the first or second meiotic division. One of the more important consequences of these genetic alterations is a predictable, although variable, disturbance in the architecture of the craniofacial region [1]. Postnatal craniofacial morphology has been extensively studied in Down's syndrome (DS). However, little information is available on human prenatal development of the head and face in such patients. The time at which changes in craniofacial phenotype first emerge in Down's syndrome fetuses, and at which physical growth begins to diverge from normal, is unknown. To explore these questions, we compared prenatal craniofacial growth in 50 Down's syndrome fetuses with that of 555 fetuses judged to be "typical for body weight and age," using the method of log-linear allometry [2].
72.
Decades of practice and research suggest that nurse practitioners (NPs) provide cost-effective and high-quality care. Managed care's emphasis on prevention and cost savings led some policy makers to view NPs as a way to meet the need for primary care providers. However, access to and utilization of NPs have increasingly been controlled by managed care organizations (MCOs) through their selection of providers for primary care panels. This study employed qualitative methodology to examine NPs' experiences with MCOs. Three focus groups, comprising 27 NPs in New York and Connecticut, revealed NPs' mixed reactions to managed care and a range of sentiments regarding NPs' efforts to be listed as primary care providers. The results reflected NPs' concerns about their perceived "invisibility," as well as their sense of "invincibility" in the ways in which NPs are responding to the barriers posed by MCOs. They identified barriers to, as well as ways to facilitate, being listed by MCOs, and described the importance of NPs working individually and collectively in negotiating with MCOs.
73.
Traditionally, the term "precision" denotes the quality of work, uniting the correctness and reproducibility of results. It can, however, have another sense: the minimal difference between specimens detectable by a given device or method. To rule out misunderstanding, we propose using the term "measuring precision" when the term is used in this second meaning. Measuring precision is easy to assess by statistical processing of a large body of data: if the histogram shows waves that disappear after the histogram step is increased, the device and method used in the study permit biochemical analysis only at a measuring precision corresponding to the histogram step at which the waves disappear. A table of measuring precision values is offered for 19 methods run on the Labsystems FP-900 analyzer (Finland); in many cases the measuring precision is clearly low.
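The histogram criterion can be illustrated with simulated data (not the paper's): readings quantized to 0.1 units produce "waves" (interleaved empty bins) when binned at a step finer than 0.1, and the waves vanish once the step matches the quantization.

```python
import random
from collections import Counter

random.seed(1)
# Hypothetical analyzer readings: continuous true values that the device
# reports rounded to 0.1 units - its "measuring precision".
readings = [round(random.uniform(3.0, 7.0), 1) for _ in range(2000)]

def empty_bin_fraction(data, step):
    """Histogram the data with bin width `step` and return the fraction of
    empty bins between the first and last occupied bin. A high fraction
    means the histogram shows 'waves'."""
    bins = Counter(round(v / step) for v in data)
    span = max(bins) - min(bins) + 1
    return 1 - len(bins) / span

fine = empty_bin_fraction(readings, 0.02)   # step finer than the precision
coarse = empty_bin_fraction(readings, 0.1)  # step equal to the precision
print(fine, coarse)  # waves at the fine step, none at the coarse step
```

Increasing the step until the empty-bin fraction drops to near zero estimates the measuring precision, in the sense defined above.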
74.
BACKGROUND: Like other areas of health care, critical care faces increasing pressure to improve the quality of care while reducing its cost. Strategies drawn from the literature and the authors' experiences are presented. STRATEGIES AND OPPORTUNITIES FOR IMPROVEMENTS: Ten process- or structure-related areas are targeted as strategically important focuses of improvement: (1) restructuring administrative lines to better suit key processes; (2) physician leadership in critical care units; (3) management training for critical care managers; (4) triage; (5) multidisciplinary critical care; (6) standardization of care; (7) developing alternatives to critical care units; (8) timeliness of care delivery; (9) appropriate use of critical care resources; and (10) tracking quality improvement. TIMELINESS OF CARE DELIVERY: Whatever the root cause(s) of unnecessary delays, the result is inefficient use of critical care resources, and ultimately either a need for more resources or longer wait times. Innovations designed to reduce wait times and waste, such as the establishment of a microchemistry stat laboratory, may prove valuable. APPROPRIATE USE OF CRITICAL CARE RESOURCES: Possible strategies for the appropriate use of critical care resources include better selection of well-informed patients who undergo procedures. Reducing variation among physicians and organizations in providing therapies will also likely reduce some high-risk procedures offering little or no benefit, and therefore reduce the need for critical care services. Better preparation of patients and families should also make end-of-life decisions easier when questions of "futility" arise. Better information on outcomes and cost-effectiveness, and consensus on withdrawal of critical care treatments, represent two additional strategies.
75.
Much remains to be understood about how low socioeconomic status (SES) increases cardiovascular disease and mortality risk. Data from the Kuopio Ischemic Heart Disease Risk Factor Study (1984-1993) were used to estimate the associations of income with acute myocardial infarction (AMI), all-cause mortality, and cardiovascular mortality in a population-based sample of 2,272 Finnish men, with adjustment for 23 biologic, behavioral, psychologic, and social risk factors. Compared with the highest income quintile, those in the bottom quintile had age-adjusted relative hazards of 3.14 (95% confidence interval (CI) 1.77-5.56), 2.66 (95% CI 1.25-5.66), and 4.34 (95% CI 1.95-9.66) for all-cause mortality, cardiovascular mortality, and AMI, respectively. After adjustment for risk factors, the relative hazards for the same comparisons were 1.32 (95% CI 0.70-2.49), 0.70 (95% CI 0.29-1.69), and 2.83 (95% CI 1.14-7.00). In the lowest income quintile, adjustment for risk factors reduced the excess relative risk of all-cause mortality by 85%, that of cardiovascular mortality by 118%, and that of AMI by 45%. These data show how the association of SES with cardiovascular and all-cause mortality is mediated by known risk factor pathways, but full "explanations" of these associations will need to encompass why these biologic, behavioral, psychologic, and social risk factors are differentially distributed by SES.
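The percent reductions quoted above follow from comparing the excess relative hazard (hazard ratio minus 1) before and after full adjustment, and can be reproduced directly from the reported point estimates:

```python
def excess_risk_reduction(hr_age_adjusted, hr_fully_adjusted):
    """Percent of the age-adjusted excess relative hazard (HR - 1)
    removed by additionally adjusting for the 23 risk factors."""
    excess_before = hr_age_adjusted - 1
    excess_after = hr_fully_adjusted - 1
    return 100 * (excess_before - excess_after) / excess_before

all_cause = excess_risk_reduction(3.14, 1.32)       # ~85%
cardiovascular = excess_risk_reduction(2.66, 0.70)  # ~118% (HR fell below 1)
ami = excess_risk_reduction(4.34, 2.83)             # ~45%
print(round(all_cause), round(cardiovascular), round(ami))
```

A reduction above 100%, as for cardiovascular mortality, simply means the fully adjusted hazard ratio dropped below 1.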
76.
Combinatorial interaction testing (CIT) is a cost-effective sampling technique for discovering interaction faults in highly configurable systems. Constrained CIT extends the technique to situations where some features cannot coexist in a configuration, and is therefore more applicable to real-world software. Recent work on greedy algorithms to build CIT samples now efficiently supports these feature constraints. But when testing a single system configuration is expensive, greedy techniques perform worse than meta-heuristic algorithms, because greedy algorithms generally need larger samples to exercise the same set of interactions. On the other hand, current meta-heuristic algorithms have long run times when feature constraints are present. Neither class of algorithm is suitable when both constraints and the cost of testing configurations are important factors. Therefore, we reformulate one meta-heuristic search algorithm for constructing CIT samples, simulated annealing, to incorporate constraints more efficiently. We identify a set of algorithmic changes and experiment with our modifications on 35 realistic constrained problems and on a set of unconstrained problems from the literature to isolate the factors that improve performance. Our evaluation determines that the optimizations reduce run time by a factor of 90 and accomplish the same coverage objectives with even fewer system configurations. Furthermore, the new version compares favorably with greedy algorithms on real-world problems, and, though our modifications were aimed at constrained problems, it shows similar advantages when feature constraints are absent.
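The coverage objective behind constrained CIT can be sketched concretely: enumerate the valid 2-way feature interactions a sample must exercise, excluding constrained combinations, and count how many a given set of configurations hits. The feature model and constraint below are hypothetical, not the paper's benchmarks:

```python
from itertools import combinations, product

# Hypothetical feature model: three features, two settings each.
features = {"os": ["linux", "win"], "db": ["sqlite", "pg"], "tls": ["on", "off"]}

# One feature constraint: "pg" cannot coexist with "tls off".
def valid(pair):
    return pair != (("db", "pg"), ("tls", "off"))

# All valid 2-way interactions a constrained covering array must exercise.
targets = set()
for (f1, v1s), (f2, v2s) in combinations(sorted(features.items()), 2):
    for v1, v2 in product(v1s, v2s):
        pair = ((f1, v1), (f2, v2))
        if valid(pair):
            targets.add(pair)

def covered(sample):
    """Valid 2-way interactions exercised by a list of full configurations."""
    hit = set()
    for config in sample:
        hit.update(combinations(sorted(config.items()), 2))
    return hit & targets

sample = [
    {"os": "linux", "db": "sqlite", "tls": "off"},
    {"os": "win", "db": "pg", "tls": "on"},
    {"os": "linux", "db": "pg", "tls": "on"},
    {"os": "win", "db": "sqlite", "tls": "off"},
]
print(len(covered(sample)), "of", len(targets))  # 10 of 11 interactions covered
```

A simulated-annealing search would repeatedly mutate such a sample, scoring candidates by covered-interaction count, which is exactly where constraint handling dominates the run time.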
77.
Often, even if a crowd simulation looks good in general, some specific individual behaviors do not seem correct. Spotting such problems manually can become tedious, but ignoring them may harm the simulation's credibility. In this paper we present a data-driven approach for evaluating the behaviors of individuals within a simulated crowd. Based on video footage of a real crowd, a database of behavior examples is generated. Given a simulation of a crowd, an analogous analysis is performed on it, defining a set of queries, which are matched by a similarity function to the database examples. The results offer a possible objective answer to the question of how similar the simulated individual behaviors are to real observed behaviors. Moreover, by changing the video input one can change the context of evaluation. We show several examples of evaluating simulated crowds produced using different techniques and comprising dense crowds, sparse crowds, and flocks.
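The query-matching step can be pictured as a nearest-neighbor lookup of a simulated individual's behavior descriptor against the example database. The descriptors and labels below (speed, local density, path curvature) are invented for illustration, not the paper's actual features:

```python
import math

# Hypothetical database built from real-crowd video: each entry is a
# behavior label and a descriptor (mean speed, local density, curvature).
database = [
    ("walk_in_lane", (1.3, 0.8, 0.05)),
    ("stop_and_wait", (0.1, 1.5, 0.02)),
    ("weave_through", (1.1, 2.2, 0.40)),
]

def similarity(a, b):
    """Inverse-distance similarity between two behavior descriptors."""
    return 1.0 / (1.0 + math.dist(a, b))

def best_match(query):
    """Return the database example most similar to the query descriptor."""
    return max(database, key=lambda item: similarity(query, item[1]))

# Descriptor extracted from one simulated agent (hypothetical values).
label, _ = best_match((1.2, 2.0, 0.35))
print(label)
```

Aggregating such similarity scores over all simulated individuals yields the kind of objective evaluation the abstract describes; swapping the database changes the evaluation context.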
78.
In this paper we present new edge detection algorithms motivated by recent developments in edge-adapted reconstruction techniques [F. Aràndiga, A. Cohen, R. Donat, N. Dyn, B. Matei, Approximation of piecewise smooth functions and images by edge-adapted (ENO-EA) nonlinear multiresolution techniques, Appl. Comput. Harmon. Anal. 24 (2) (2008) 225–250]. They are based on comparing local quantities rather than on filtering and thresholding. This comparison process is invariant under certain transformations that model light changes in the image; hence we obtain edge detection algorithms that are insensitive to changes in illumination.
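The comparison-based idea can be illustrated in 1D: an edge measure built from a ratio of local quantities is unchanged under a multiplicative light change f → c·f, because c cancels between numerator and denominator. This toy measure is only a sketch of the invariance principle, not the authors' algorithm:

```python
def edge_strength(f, i, eps=1e-12):
    """Ratio of the local jump to the local magnitude. Invariant under
    f -> c*f: the factor c cancels in the quotient."""
    return abs(f[i + 1] - f[i - 1]) / (abs(f[i + 1]) + abs(f[i - 1]) + eps)

signal = [10, 10, 10, 80, 80, 80]      # one sharp edge between indices 2 and 3
brighter = [3 * v for v in signal]     # multiplicative illumination change

edges = [i for i in range(1, len(signal) - 1)
         if edge_strength(signal, i) > 0.3]
edges_bright = [i for i in range(1, len(brighter) - 1)
                if edge_strength(brighter, i) > 0.3]
print(edges, edges_bright)  # the same edge positions in both signals
```

A threshold on a filtered derivative, by contrast, would fire differently on the two signals unless retuned.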
79.
We propose and study quantitative measures of smoothness f ↦ A(f) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles, the simplest example being \(A_{p}(f)=\|\sqrt{|\mathrm{det}(d^{2}f)|}\|_{L^{\tau}}\), which appears when approximating in the \(L^{p}\) norm by piecewise linear elements when \(\frac{1}{\tau}=\frac{1}{p}+1\). The quantities A(f) are not semi-norms, and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
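For concreteness, the stated relation between \(\tau\) and \(p\) resolves to a closed form:

```latex
\frac{1}{\tau} = \frac{1}{p} + 1 = \frac{p+1}{p}
\quad\Longrightarrow\quad
\tau = \frac{p}{p+1},
\qquad \text{e.g. } p = 2 \ \Rightarrow\ \tau = \tfrac{2}{3}.
```

Since \(\tau < 1\) for every \(p \geq 1\), the quantity \(A_{p}(f)\) is measured in a non-normable \(L^{\tau}\) quasi-norm, consistent with the remark that these quantities are not semi-norms.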
80.
This paper deals with compact label-based representations for trees. Consider an n-node undirected connected graph G with a predefined numbering on the ports of each node. The all-ports tree labeling ℒ_all gives each node v of G a label containing the port numbers of all the tree edges incident to v. The upward tree labeling ℒ_up labels each node v by the number of the port leading from v to its parent in the tree. Our measures of interest are the worst-case and total length of the labels used by the scheme, denoted M_up(T) and S_up(T) for ℒ_up, and M_all(T) and S_all(T) for ℒ_all. The problem studied in this paper is the following: given a graph G and a predefined port labeling for it, with the ports of each node v numbered by 0,…,deg(v)−1, select a rooted spanning tree for G minimizing (one of) these measures. We show that the problem is polynomial for M_up(T), S_up(T), and S_all(T) but NP-hard for M_all(T) (even for 3-regular planar graphs). We show that for every graph G and port labeling there exists a spanning tree T for which S_up(T)=O(n log log n). We give a tight bound of O(n) in the cases of complete graphs with arbitrary labeling and arbitrary graphs with symmetric port labeling. We conclude by discussing some applications of our tree representation schemes. A preliminary version of this paper appeared in the proceedings of the 7th International Workshop on Distributed Computing (IWDC), Kharagpur, India, December 27–30, 2005, as part of: Cohen, R. et al., Labeling schemes for tree representation, in: Proceedings of the 7th International Workshop on Distributed Computing (IWDC), Lecture Notes in Computer Science, vol. 3741, pp. 13–24 (2005). R. Cohen was supported by the Pacific Theaters Foundation. P. Fraigniaud and D. Ilcinkas were supported by the project "PairAPair" of the ACI Masses de Données, the project "Fragile" of the ACI Sécurité et Informatique, and by the project "Grand Large" of INRIA. A. Korman was supported in part by an Aly Kaufman fellowship. D. Peleg was supported in part by a grant from the Israel Science Foundation.
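The upward labeling ℒ_up is simple to compute once a rooted spanning tree is fixed: each non-root node stores the port through which it reaches its parent. A toy sketch on a hypothetical 4-node graph (the paper's measures count label lengths in bits; the bit-length stand-in below is a simplification):

```python
# Hypothetical 4-node graph; ports[v] lists v's neighbors in port order,
# so ports[v][k] is the neighbor reached through port k of v.
ports = {
    0: [1, 2],   # node 0: port 0 -> node 1, port 1 -> node 2
    1: [3, 0],   # node 1: port 0 -> node 3, port 1 -> node 0
    2: [0],
    3: [1],
}

# A rooted spanning tree of the same graph, given as parent pointers (root 0).
parent = {1: 0, 2: 0, 3: 1}

def upward_labels(ports, parent):
    """L_up: label each non-root node with the port leading to its parent."""
    return {v: ports[v].index(p) for v, p in parent.items()}

labels = upward_labels(ports, parent)

# Rough stand-in for label length: bit length of the port number,
# at least 1 bit per label.
sizes = {v: max(1, port.bit_length()) for v, port in labels.items()}
S_up = sum(sizes.values())   # total label length
M_up = max(sizes.values())   # worst-case label length
print(labels, S_up, M_up)
```

The optimization problem the paper studies is the choice of the spanning tree itself: different trees over the same port-numbered graph yield different label sets, and hence different values of these measures.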