4,007 results (search time: 15 ms)
71.

The algorithm selection problem is defined as identifying the best-performing machine learning (ML) algorithm for a given combination of dataset, task, and evaluation measure. The human expertise required to evaluate the increasing number of ML algorithms available has resulted in the need to automate the algorithm selection task. Various approaches have emerged to handle the automatic algorithm selection challenge, including meta-learning. Meta-learning is a popular approach that leverages accumulated experience for future learning and typically involves dataset characterization. Existing meta-learning methods often represent a dataset using predefined features and thus cannot be generalized across different ML tasks, or alternatively, learn a dataset’s representation in a supervised manner and therefore are unable to deal with unsupervised tasks. In this study, we propose a novel learning-based task-agnostic method for producing dataset representations. Then, we introduce TRIO, a meta-learning approach that utilizes the proposed dataset representations to accurately recommend top-performing algorithms for previously unseen datasets. TRIO first learns graphical representations for the datasets, using four tools to learn the latent interactions among dataset instances, and then utilizes a graph convolutional neural network technique to extract embedding representations from the graphs obtained. We extensively evaluate the effectiveness of our approach on 337 datasets and 195 ML algorithms, demonstrating that TRIO significantly outperforms state-of-the-art methods for algorithm selection for both supervised (classification and regression) and unsupervised (clustering) tasks.

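The pipeline the abstract describes (an instance-level graph, then graph-convolutional aggregation, then pooling into a dataset embedding) can be illustrated with a deliberately simplified NumPy sketch. The k-NN graph construction, the random untrained projection `W`, and the single tanh layer here are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

def dataset_embedding(X, k=3, dim=4, seed=0):
    """Embed a dataset (n x d matrix) into a fixed-size vector by
    (1) building a k-NN graph over its instances and
    (2) applying one graph-convolution-style layer, then mean-pooling."""
    n = X.shape[0]
    # pairwise squared Euclidean distances between instances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    # adjacency: connect each instance to its k nearest neighbours
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d2[i])[:k]] = 1.0
    A = np.maximum(A, A.T)                # symmetrise
    A_hat = A + np.eye(n)                 # add self-loops
    A_norm = A_hat / A_hat.sum(1)[:, None]  # row-normalised propagation matrix
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], dim))  # random (untrained) projection
    H = np.tanh(A_norm @ X @ W)           # one "convolution" layer
    return H.mean(axis=0)                 # pool node embeddings -> dataset vector

X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.]])
emb = dataset_embedding(X, k=2)
print(emb.shape)  # (4,)
```

Because the pooling step discards instance order and count, datasets of different sizes map to vectors of the same dimension, which is what makes a task-agnostic comparison between datasets possible.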
72.
Decades of practice and research suggest that nurse practitioners (NPs) provide cost-effective and high-quality care. Managed care's emphasis on prevention and cost savings led some policy makers to view NPs as a way to meet the need for primary care providers. However, access to and utilization of NPs has increasingly been controlled by managed care organizations (MCOs) through their selection of providers for primary care panels. This study employed qualitative methodology to examine NPs' experiences with MCOs. Three focus groups, comprising 27 NPs in New York and Connecticut, revealed NPs' mixed reactions to managed care and a range of sentiments regarding NPs' efforts to be listed as primary care providers. The results reflected NPs' concerns about their perceived "invisibility," as well as their sense of "invincibility" in the ways in which NPs are responding to the barriers posed by MCOs. They identified barriers to, as well as ways to facilitate, being listed by MCOs, and described the importance of NPs working individually and collectively in negotiating with MCOs.
73.
BACKGROUND: Like other areas of health care, critical care faces increasing pressure to improve the quality while reducing the cost of care. Strategies drawn from the literature and the authors' experiences are presented. STRATEGIES AND OPPORTUNITIES FOR IMPROVEMENTS: Ten process- or structure-related areas are targeted as strategically important focuses of improvement: (1) restructuring administrative lines to better suit key processes; (2) physician leadership in critical care units; (3) management training for critical care managers; (4) triage; (5) multidisciplinary critical care; (6) standardization of care; (7) developing alternatives to critical care units; (8) timeliness of care delivery; (9) appropriate use of critical care resources; and (10) tracking quality improvement. TIMELINESS OF CARE DELIVERY: Whatever the root cause(s) of unnecessary delays, the result is inefficient use of critical care resources-and ultimately either a need for more resources or longer wait times. Innovations designed to reduce wait times and waste, such as the establishment of a microchemistry stat laboratory, may prove valuable. APPROPRIATE USE OF CRITICAL CARE RESOURCES: Possible strategies for the appropriate use of critical care resources include better selection of well-informed patients who undergo procedures. Reduction in variation among physicians and organizations in providing therapies will also likely lead to a reduction in some high-risk procedures offering little or no benefit, and therefore a reduction in need for critical care services. Better preparation of patients and families should also make end-of-life decisions easier when questions of "futility" arise. Better information on outcomes and cost-effectiveness and consensus on withdrawal of critical care treatments represent two additional strategies.
74.
Much remains to be understood about how low socioeconomic status (SES) increases cardiovascular disease and mortality risk. Data from the Kuopio Ischemic Heart Disease Risk Factor Study (1984-1993) were used to estimate the associations of income with acute myocardial infarction (AMI), all-cause mortality, and cardiovascular mortality in a population-based sample of 2,272 Finnish men, with adjustment for 23 biologic, behavioral, psychologic, and social risk factors. Compared with the highest income quintile, those in the bottom quintile had age-adjusted relative hazards of 3.14 (95% confidence interval (CI) 1.77-5.56), 2.66 (95% CI 1.25-5.66), and 4.34 (95% CI 1.95-9.66) for all-cause mortality, cardiovascular mortality, and AMI, respectively. After adjustment for risk factors, the relative hazards for the same comparisons were 1.32 (95% CI 0.70-2.49), 0.70 (95% CI 0.29-1.69), and 2.83 (95% CI 1.14-7.00). In the lowest income quintile, adjustment for risk factors reduced the excess relative risk of all-cause mortality by 85%, that of cardiovascular mortality by 118%, and that of acute myocardial infarction by 45%. These data show how the association of SES with all-cause and cardiovascular mortality is mediated by known risk factor pathways, but full "explanations" for these associations will need to encompass why these biologic, behavioral, psychologic, and social risk factors are differentially distributed by SES.
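The reported percentage reductions follow directly from the hazard ratios: the excess relative risk is the hazard ratio minus 1, and the reduction is the share of that excess removed by adjustment. A quick check of the abstract's figures:

```python
def excess_risk_reduction(rr_unadjusted, rr_adjusted):
    """Fraction of the excess relative risk (RR - 1) removed by adjustment."""
    return (rr_unadjusted - rr_adjusted) / (rr_unadjusted - 1.0)

# age-adjusted vs. fully adjusted hazard ratios from the abstract
print(round(100 * excess_risk_reduction(3.14, 1.32)))  # 85  (all-cause mortality)
print(round(100 * excess_risk_reduction(2.66, 0.70)))  # 118 (cardiovascular mortality)
print(round(100 * excess_risk_reduction(4.34, 2.83)))  # 45  (acute myocardial infarction)
```

Note that a reduction above 100% (as for cardiovascular mortality) simply means the adjusted hazard ratio dropped below 1.0.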
75.
Combinatorial interaction testing (CIT) is a cost-effective sampling technique for discovering interaction faults in highly-configurable systems. Constrained CIT extends the technique to situations where some features cannot coexist in a configuration, and is therefore more applicable to real-world software. Recent work on greedy algorithms to build CIT samples now efficiently supports these feature constraints. But when testing a single system configuration is expensive, greedy techniques perform worse than meta-heuristic algorithms, because greedy algorithms generally need larger samples to exercise the same set of interactions. On the other hand, current meta-heuristic algorithms have long run times when feature constraints are present. Neither class of algorithm is suitable when both constraints and the cost of testing configurations are important factors. Therefore, we reformulate one meta-heuristic search algorithm for constructing CIT samples, simulated annealing, to more efficiently incorporate constraints. We identify a set of algorithmic changes and experiment with our modifications on 35 realistic constrained problems and on a set of unconstrained problems from the literature to isolate the factors that improve performance. Our evaluation determines that the optimizations reduce run time by a factor of 90 and accomplish the same coverage objectives with even fewer system configurations. Furthermore, the new version compares favorably with greedy algorithms on real-world problems, and, though our modifications were aimed at constrained problems, it shows similar advantages when feature constraints are absent.
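The general idea can be sketched minimally: simulated annealing over a fixed-size sample of configurations, minimizing the count of uncovered valid pairwise interactions, with moves that violate a feature constraint rejected outright. The toy constraint, problem size, and cooling schedule below are invented for illustration and are not the paper's algorithm:

```python
import itertools, math, random

def constraint_ok_row(row):
    # example feature constraint: factors 0 and 1 cannot both be enabled
    return not (row[0] == 1 and row[1] == 1)

def constraint_ok_pair(i, vi, j, vj):
    # pairwise view of the same constraint (used to skip infeasible targets)
    return not (i == 0 and j == 1 and vi == 1 and vj == 1)

def uncovered_pairs(sample, k):
    """Count valid pairwise (factor=value) interactions not covered by the sample."""
    missing = 0
    for i, j in itertools.combinations(range(k), 2):
        for vi, vj in itertools.product([0, 1], repeat=2):
            if constraint_ok_pair(i, vi, j, vj) and \
               not any(r[i] == vi and r[j] == vj for r in sample):
                missing += 1
    return missing

def anneal(k=4, n_rows=6, steps=20000, seed=1):
    rng = random.Random(seed)
    sample = []
    while len(sample) < n_rows:                # constraint-respecting random start
        row = [rng.randint(0, 1) for _ in range(k)]
        if constraint_ok_row(row):
            sample.append(row)
    cost = uncovered_pairs(sample, k)
    t = 1.0
    for _ in range(steps):
        if cost == 0:
            break
        r, c = rng.randrange(n_rows), rng.randrange(k)
        cand = [row[:] for row in sample]
        cand[r][c] ^= 1                        # neighbour move: flip one cell
        if not constraint_ok_row(cand[r]):
            continue                           # never visit invalid configurations
        new_cost = uncovered_pairs(cand, k)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            sample, cost = cand, new_cost      # Metropolis acceptance
        t = max(t * 0.999, 1e-3)               # geometric cooling
    return sample, cost

sample, cost = anneal()
print(cost)  # 0 means every valid pairwise interaction is covered
```

Rejecting constraint-violating moves up front, rather than repairing or penalizing them, is one simple way to "incorporate constraints" into the search; the paper's actual changes are more involved.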
76.
Often, even when a crowd simulation looks good in general, some specific individual behaviors may not seem correct. Spotting such problems manually can become tedious, but ignoring them may harm the simulation's credibility. In this paper we present a data-driven approach for evaluating the behaviors of individuals within a simulated crowd. Based on video footage of a real crowd, a database of behavior examples is generated. Given a simulation of a crowd, an analogous analysis is performed on it, defining a set of queries, which are matched by a similarity function to the database examples. The results offer a possible objective answer to the question of how similar the simulated individual behaviors are to real observed behaviors. Moreover, by changing the video input one can change the context of evaluation. We show several examples of evaluating simulated crowds produced using different techniques, comprising dense crowds, sparse crowds and flocks.
77.
In this paper we present new edge detection algorithms which are motivated by recent developments on edge-adapted reconstruction techniques [F. Aràndiga, A. Cohen, R. Donat, N. Dyn, B. Matei, Approximation of piecewise smooth functions and images by edge-adapted (ENO-EA) nonlinear multiresolution techniques, Appl. Comput. Harmon. Anal. 24 (2) (2008) 225–250]. They are based on comparing local quantities rather than on filtering and thresholding. This comparison process is invariant under certain transformations that model light changes in the image, hence we obtain edge detection algorithms which are insensitive to changes in illumination.
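The idea of comparing local quantities rather than thresholding a filter response can be illustrated in one dimension: a contrast ratio is unchanged when the whole signal is rescaled by a multiplicative light change I → λI. This toy score is only an illustration of the invariance principle, not the ENO-EA-based detectors of the paper:

```python
import numpy as np

def edge_score(signal, eps=1e-12):
    """Contrast-ratio edge score: (nearly) invariant under a multiplicative
    illumination change I -> lambda * I, unlike a raw gradient threshold."""
    s = np.asarray(signal, dtype=float)
    left, right = s[:-2], s[2:]
    return np.abs(right - left) / (np.abs(right) + np.abs(left) + eps)

signal = np.array([1., 1., 1., 1., 5., 5., 5., 5.])
bright = 10.0 * signal                 # same scene under stronger light
e1, e2 = edge_score(signal), edge_score(bright)
print(np.allclose(e1, e2))            # True: the score ignores the light change
```

A plain gradient magnitude `|right - left|` would grow tenfold under the same rescaling, so any fixed threshold would behave differently in the two images; the ratio does not.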
78.
We propose and study quantitative measures of smoothness f ↦ A(f) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements, when no constraint is imposed on the aspect ratio of the triangles, the simplest example being \(A_{p}(f)=\|\sqrt{|\mathrm{det}(d^{2}f)|}\|_{L^{\tau}}\), which appears when approximating in the L^p norm by piecewise linear elements when \(\frac{1}{\tau}=\frac{1}{p}+1\). The quantities A(f) are not semi-norms, and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
79.
This paper deals with compact label-based representations for trees. Consider an n-node undirected connected graph G with a predefined numbering on the ports of each node. The all-ports tree labeling ℒ_all gives each node v of G a label containing the port numbers of all the tree edges incident to v. The upward tree labeling ℒ_up labels each node v by the number of the port leading from v to its parent in the tree. Our measure of interest is the worst-case and total length of the labels used by the scheme, denoted M_up(T) and S_up(T) for ℒ_up, and M_all(T) and S_all(T) for ℒ_all. The problem studied in this paper is the following: given a graph G and a predefined port labeling for it, with the ports of each node v numbered by 0,…,deg(v)−1, select a rooted spanning tree for G minimizing (one of) these measures. We show that the problem is polynomial for M_up(T), S_up(T) and S_all(T), but NP-hard for M_all(T) (even for 3-regular planar graphs). We show that for every graph G and port labeling there exists a spanning tree T for which S_up(T) = O(n log log n). We give a tight bound of O(n) in the cases of complete graphs with arbitrary labeling and arbitrary graphs with symmetric port labeling. We conclude by discussing some applications for our tree representation schemes. A preliminary version of this paper appeared in the proceedings of the 7th International Workshop on Distributed Computing (IWDC), Kharagpur, India, December 27–30, 2005, as Cohen, R. et al.: Labeling schemes for tree representation. In: Proceedings of the 7th International Workshop on Distributed Computing (IWDC), Lecture Notes in Computer Science, vol. 3741, pp. 13–24 (2005). R. Cohen was supported by the Pacific Theaters Foundation. P. Fraigniaud and D. Ilcinkas were supported by the project "PairAPair" of the ACI Masses de Données, the project "Fragile" of the ACI Sécurité et Informatique, and by the project "Grand Large" of INRIA. A. Korman was supported in part by an Aly Kaufman fellowship. D. Peleg was supported in part by a grant from the Israel Science Foundation.
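The four measures can be made concrete with a small sketch. The dictionary encoding of the rooted tree and the bit-length cost per port number are assumptions chosen for illustration:

```python
def label_len(port):
    """Bits needed to write one port number (at least 1)."""
    return max(port.bit_length(), 1)

def tree_label_costs(parent_port, child_ports):
    """parent_port: node -> port leading to its parent (the upward labeling).
    child_ports:   node -> list of ports leading to its children.
    Returns (M_up, S_up, M_all, S_all) for this rooted spanning tree:
    the upward label of v holds one port; the all-ports label holds every
    port of a tree edge incident to v."""
    up_sizes = [label_len(p) for p in parent_port.values()]
    all_sizes = []
    for v in set(parent_port) | set(child_ports):
        ports = list(child_ports.get(v, []))
        if v in parent_port:                    # the root has no upward port
            ports.append(parent_port[v])
        all_sizes.append(sum(label_len(p) for p in ports))
    return max(up_sizes), sum(up_sizes), max(all_sizes), sum(all_sizes)

# toy tree: root r with children a, b; a has child c
parent_port = {"a": 0, "b": 1, "c": 2}   # port from each non-root node to its parent
child_ports = {"r": [0, 1], "a": [3]}    # ports from each node to its children
print(tree_label_costs(parent_port, child_ports))  # (2, 4, 3, 8)
```

The sketch makes the paper's optimization target visible: different spanning trees of the same graph induce different port choices and hence different values of these four quantities.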
80.
In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically using a front propagation approach. We assume that the desired object has a closed boundary. This a priori knowledge of the topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain visited by the front yields a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal paths allows us to extract a closed contour from a 2D image. The approach can also be used to find an open curve, given extra information to serve as a stopping criterion. Detection of a variety of objects in real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image, which we call Geodesic Meshing. The proposed method is applied to 3D data with promising results.
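A discrete stand-in for the minimal-path step: Dijkstra's algorithm on a 4-connected cost grid plays the role of the continuous front propagation used in the paper, linking a keypoint to its neighbor along the cheapest route. The grid, costs, and connectivity here are illustrative only:

```python
import heapq

def minimal_path(cost, start, goal):
    """Cheapest path between two grid cells, where a path pays the cost of
    every cell it visits (a discrete analogue of a geodesic in a cost map)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal             # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# low-cost "valley" along the top row and right column; high cost elsewhere
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
path, total = minimal_path(grid, (0, 0), (2, 3))
print(total)  # 6: the path hugs the cheap cells
```

In a contour-extraction setting the cost map would be low on image edges and high elsewhere, so the minimal paths between detected keypoints trace the object boundary.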