101.
Much remains to be understood about how low socioeconomic status (SES) increases cardiovascular disease and mortality risk. Data from the Kuopio Ischemic Heart Disease Risk Factor Study (1984-1993) were used to estimate the associations of income with acute myocardial infarction (AMI), all-cause mortality, and cardiovascular mortality in a population-based sample of 2,272 Finnish men, with adjustment for 23 biologic, behavioral, psychologic, and social risk factors. Compared with the highest income quintile, those in the bottom quintile had age-adjusted relative hazards of 3.14 (95% confidence interval (CI) 1.77-5.56), 2.66 (95% CI 1.25-5.66), and 4.34 (95% CI 1.95-9.66) for all-cause mortality, cardiovascular mortality, and AMI, respectively. After adjustment for risk factors, the relative hazards for the same comparisons were 1.32 (95% CI 0.70-2.49), 0.70 (95% CI 0.29-1.69), and 2.83 (95% CI 1.14-7.00). In the lowest income quintile, adjustment for risk factors reduced the excess relative risk of all-cause mortality by 85%, that of cardiovascular mortality by 118%, and that of AMI by 45%. These data show how the association of SES with cardiovascular and all-cause mortality is mediated by known risk factor pathways, but full "explanations" for these associations will need to encompass why these biologic, behavioral, psychologic, and social risk factors are differentially distributed by SES.
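The stated percentage reductions follow from comparing each age-adjusted hazard ratio with its risk-factor-adjusted counterpart relative to the null value of 1. A minimal sketch of that arithmetic (the excess-risk formula is the standard one and reflects our reading of the abstract, not a quotation from the paper):

```python
# Fraction of excess relative risk explained by risk-factor adjustment:
# reduction = (HR_age_adjusted - HR_fully_adjusted) / (HR_age_adjusted - 1)
hazard_ratios = {
    "all-cause mortality":          (3.14, 1.32),
    "cardiovascular mortality":     (2.66, 0.70),
    "acute myocardial infarction":  (4.34, 2.83),
}

for outcome, (hr_age, hr_full) in hazard_ratios.items():
    reduction = (hr_age - hr_full) / (hr_age - 1.0)
    print(f"{outcome}: {reduction:.0%} of excess risk explained")
# -> roughly 85%, 118%, and 45%, matching the figures quoted in the abstract
```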
102.
Three experiments examined how norms characteristic of a "culture of honor" manifest themselves in the cognitions, emotions, behaviors, and physiological reactions of southern White males. Participants were University of Michigan students who grew up in the North or South. In all three experiments, they were insulted by a confederate who bumped into the participant and called him an "asshole." Compared with northerners (who were relatively unaffected by the insult), southerners were more likely to think their masculine reputation was threatened, more upset (as shown by a rise in cortisol levels), more physiologically primed for aggression (as shown by a rise in testosterone levels), more cognitively primed for aggression, and more likely to engage in aggressive and dominant behavior. Findings highlight the insult–aggression cycle in cultures of honor, in which insults diminish a man's reputation and he tries to restore his status by aggressive or violent behavior.
103.
EPW (Electron–Phonon coupling using Wannier functions) is a program written in Fortran90 for calculating the electron–phonon coupling in periodic systems using density-functional perturbation theory and maximally localized Wannier functions. EPW can calculate electron–phonon interaction self-energies, electron–phonon spectral functions, and total as well as mode-resolved electron–phonon coupling strengths. The calculation of the electron–phonon coupling requires a very accurate sampling of electron–phonon scattering processes throughout the Brillouin zone, hence reliable calculations can be prohibitively time-consuming. EPW combines the Kohn–Sham electronic eigenstates and the vibrational eigenmodes provided by the Quantum ESPRESSO package (see Giannozzi et al., 2009 [1]) with the maximally localized Wannier functions provided by the wannier90 package (see Mostofi et al., 2008 [2]) in order to generate electron–phonon matrix elements on arbitrarily dense Brillouin zone grids using a generalized Fourier interpolation. This feature of EPW leads to fast and accurate calculations of the electron–phonon coupling, and enables the study of the electron–phonon coupling in large and complex systems.
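The generalized Fourier interpolation underlying EPW can be pictured as follows: quantities computed on a coarse Brillouin-zone grid are transformed to a real-space (Wannier-like) representation, where they are short-ranged, and then summed back onto an arbitrarily dense k-grid. The NumPy sketch below illustrates only this generic idea in one dimension; it is not EPW's Fortran implementation, and the grid sizes and test function are invented for illustration:

```python
import numpy as np

# A band-like quantity sampled on a coarse, uniform 1D k-grid
# (stand-in for matrix elements computed from first principles).
nk_coarse = 8
k_coarse = np.arange(nk_coarse) / nk_coarse
f_coarse = np.cos(2 * np.pi * k_coarse) + 0.3 * np.cos(4 * np.pi * k_coarse)

# Step 1: transform to a real-space ("Wannier-like") representation.
# For smooth periodic quantities these coefficients decay quickly with |R|.
f_R = np.fft.ifft(f_coarse)
R = np.fft.fftfreq(nk_coarse) * nk_coarse   # lattice vectors ..., -2, -1, 0, 1, 2, ...

# Step 2: evaluate the truncated Fourier sum on an arbitrarily dense k-grid.
k_dense = np.linspace(0.0, 1.0, 200, endpoint=False)
phases = np.exp(-2j * np.pi * np.outer(k_dense, R))
f_dense = (phases @ f_R).real

# Sanity check: the dense values reproduce the smooth underlying function
# at a cost far below recomputing it point by point from first principles.
assert np.allclose(f_dense, np.cos(2 * np.pi * k_dense) + 0.3 * np.cos(4 * np.pi * k_dense))
```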

Program summary

Program title: EPW
Catalogue identifier: AEHA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License
No. of lines in distributed program, including test data, etc.: 304 443
No. of bytes in distributed program, including test data, etc.: 1 487 466
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any architecture with a Fortran 90 compiler
Operating system: Any environment with a Fortran 90 compiler
Has the code been vectorized or parallelized?: Yes, optimized for 1 to 64 processors
RAM: Heavily system dependent, as small as a few MB
Supplementary material: A copy of the "EPW/examples" directory containing the phonon binary files can be downloaded
Classification: 7
External routines: MPI, Quantum-ESPRESSO package [1], BLAS, LAPACK, FFTW. (The necessary BLAS, LAPACK and FFTW routines are included in the Quantum-ESPRESSO package [1].)
Nature of problem: The calculation of the electron–phonon coupling from first principles requires a very accurate sampling of electron–phonon scattering processes throughout the Brillouin zone; hence reliable calculations can be prohibitively time-consuming.
Solution method: EPW makes use of a real-space formulation and combines the Kohn–Sham electronic eigenstates and the vibrational eigenmodes provided by the Quantum-ESPRESSO package with the maximally localized Wannier functions provided by the wannier90 package in order to generate electron–phonon matrix elements on arbitrarily dense Brillouin zone grids using a generalized Fourier interpolation.
Running time: Single processor examples typically take 5–10 minutes.
References:
  • [1] P. Giannozzi, et al., J. Phys.: Condens. Matter 21 (2009) 395502, http://www.quantum-espresso.org/.
104.
Combinatorial interaction testing (CIT) is a cost-effective sampling technique for discovering interaction faults in highly-configurable systems. Constrained CIT extends the technique to situations where some features cannot coexist in a configuration, and is therefore more applicable to real-world software. Recent work on greedy algorithms to build CIT samples now efficiently supports these feature constraints. But when testing a single system configuration is expensive, greedy techniques perform worse than meta-heuristic algorithms, because greedy algorithms generally need larger samples to exercise the same set of interactions. On the other hand, current meta-heuristic algorithms have long run times when feature constraints are present. Neither class of algorithm is suitable when both constraints and the cost of testing configurations are important factors. Therefore, we reformulate one meta-heuristic search algorithm for constructing CIT samples, simulated annealing, to more efficiently incorporate constraints. We identify a set of algorithmic changes and experiment with our modifications on 35 realistic constrained problems and on a set of unconstrained problems from the literature to isolate the factors that improve performance. Our evaluation determines that the optimizations reduce run time by a factor of 90 and accomplish the same coverage objectives with even fewer system configurations. Furthermore, the new version compares favorably with greedy algorithms on real-world problems, and, though our modifications were aimed at constrained problems, it shows similar advantages when feature constraints are absent.
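As a rough illustration of the underlying idea (not the authors' implementation), a simulated-annealing search for a pairwise covering array can treat the sample as a fixed-size set of configurations, score it by the number of valid feature-value pairs still uncovered, propose random single-feature changes, reject any change that violates a feature constraint, and accept the rest according to the Metropolis rule. A minimal Python sketch with invented problem sizes and a single invented constraint:

```python
import itertools, math, random

random.seed(0)
k, rows = 6, 8                          # 6 boolean features, sample of 8 configurations
forbidden = {((0, 1), (1, 1))}          # constraint: feature 0 = 1 and feature 1 = 1
                                        # may not appear together in any configuration

def violates(cfg):
    """True if the configuration contains a forbidden pair of feature values."""
    return any(cfg[a] == va and cfg[b] == vb for (a, va), (b, vb) in forbidden)

# All valid pairwise interactions that the sample must cover.
targets = [((a, va), (b, vb))
           for a, b in itertools.combinations(range(k), 2)
           for va in (0, 1) for vb in (0, 1)
           if ((a, va), (b, vb)) not in forbidden]

def uncovered(sample):
    """Number of valid pairs not exercised by any configuration in the sample."""
    covered = {((a, cfg[a]), (b, cfg[b]))
               for cfg in sample
               for a, b in itertools.combinations(range(k), 2)}
    return sum(t not in covered for t in targets)

def random_config():
    """Draw a random configuration that satisfies the constraints."""
    while True:
        cfg = tuple(random.randint(0, 1) for _ in range(k))
        if not violates(cfg):
            return cfg

sample = [random_config() for _ in range(rows)]
cost, temp = uncovered(sample), 2.0

while cost > 0 and temp > 1e-3:
    r, f = random.randrange(rows), random.randrange(k)
    candidate = list(sample[r])
    candidate[f] ^= 1                            # flip one feature value
    candidate = tuple(candidate)
    if violates(candidate):                      # never accept constraint violations
        temp *= 0.999
        continue
    new_sample = sample[:r] + [candidate] + sample[r + 1:]
    new_cost = uncovered(new_sample)
    # Metropolis rule: always accept improvements, sometimes accept regressions.
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        sample, cost = new_sample, new_cost
    temp *= 0.999

print(f"uncovered pairs: {cost}")   # 0 when a full pairwise covering array was found
```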
105.
Often, even if a crowd simulation looks good in general, some specific individual behaviors may not seem correct. Spotting such problems manually can become tedious, but ignoring them may harm the simulation's credibility. In this paper we present a data-driven approach for evaluating the behaviors of individuals within a simulated crowd. Based on video footage of a real crowd, a database of behavior examples is generated. Given a simulation of a crowd, an analogous analysis is performed on it, defining a set of queries, which are matched by a similarity function to the database examples. The results offer a possible objective answer to the question of how similar the simulated individual behaviors are to real observed behaviors. Moreover, by changing the video input one can change the context of evaluation. We show several examples of evaluating simulated crowds produced using different techniques and comprising dense crowds, sparse crowds and flocks.
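A hedged sketch of the matching step alone: if every query summarizes an individual's local state over a short time window as a fixed-length feature vector (the specific features below — speed, nearest-neighbor distance, local density — are illustrative assumptions, not the paper's descriptors), then evaluation reduces to nearest-neighbor lookups against the example database:

```python
import numpy as np

# Example database built offline from tracked video footage of a real crowd:
# each row is one observed behavior, encoded as (speed, nearest-neighbor distance, local density).
real_examples = np.array([
    [1.2, 0.8, 2.0],
    [0.9, 1.1, 1.5],
    [1.4, 0.6, 3.2],
    [0.3, 1.8, 0.7],
])

def dissimilarity(query, examples):
    """Score a simulated-behavior query by its distance to the closest real example."""
    d = np.linalg.norm(examples - query, axis=1)
    return d.min()          # small = the behavior resembles something actually observed

# Queries extracted from the simulation with the same analysis pipeline.
simulated_queries = np.array([[1.1, 0.9, 1.9], [3.5, 0.1, 6.0]])
for q in simulated_queries:
    print(q, "->", round(dissimilarity(q, real_examples), 3))
# The second query is far from every real example, flagging a suspicious individual behavior.
```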
106.
In this paper we present new edge detection algorithms which are motivated by recent developments on edge-adapted reconstruction techniques [F. Aràndiga, A. Cohen, R. Donat, N. Dyn, B. Matei, Approximation of piecewise smooth functions and images by edge-adapted (ENO-EA) nonlinear multiresolution techniques, Appl. Comput. Harmon. Anal. 24 (2) (2008) 225–250]. They are based on comparing local quantities rather than on filtering and thresholding. This comparison process is invariant under certain transformations that model light changes in the image, hence we obtain edge detection algorithms which are insensitive to changes in illumination.
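To make the invariance idea concrete with a generic example (not the ENO-EA-based construction of the paper): if a light change is modeled as a multiplicative rescaling of intensities, a decision based on a ratio of local differences is unaffected by it, whereas a fixed gradient threshold is not. A minimal NumPy sketch under that multiplicative-illumination assumption:

```python
import numpy as np

def edge_mask(u, alpha=8.0, eps=1e-12):
    """Flag pixel transitions whose jump is large *relative* to the typical jump.

    Because the test is a ratio of first differences, multiplying the whole
    image by a constant (a simple model of a light change) does not alter it.
    """
    d = np.abs(np.diff(u, axis=1))                    # local differences
    scale = d.mean(axis=1, keepdims=True) + eps       # reference variation scale
    return d > alpha * scale

# A noisy step edge seen under two illumination levels (10x brightness difference).
rng = np.random.default_rng(0)
row = np.concatenate([np.full(10, 1.0), np.full(10, 5.0)]) + 0.01 * rng.standard_normal(20)
img_dim, img_bright = row[None, :], 10.0 * row[None, :]

# The same edge locations are reported for both images.
assert np.array_equal(edge_mask(img_dim), edge_mask(img_bright))
print(np.flatnonzero(edge_mask(img_dim)[0]))          # -> [9], the step position
```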
107.
We propose and study quantitative measures of smoothness f ↦ A(f) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements, when no constraint is imposed on the aspect ratio of the triangles, the simplest example being \(A_{p}(f)=\|\sqrt{|\mathrm{det}(d^{2}f)|}\|_{L^{\tau}}\), which appears when approximating in the \(L^{p}\) norm by piecewise linear elements when \(\frac{1}{\tau}=\frac{1}{p}+1\). The quantities A(f) are not semi-norms, and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
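As a concrete (and purely illustrative) instance, consider a smooth anisotropic quadratic on the unit square; the computation below is our own worked example, not one taken from the paper. For \(f(x,y)=\tfrac12(a x^{2}+b y^{2})\) on \(\Omega=[0,1]^{2}\), the Hessian is \(d^{2}f=\mathrm{diag}(a,b)\), so
\[
\sqrt{|\mathrm{det}(d^{2}f)|}=\sqrt{ab},\qquad
A_{p}(f)=\big\|\sqrt{ab}\big\|_{L^{\tau}(\Omega)}=\sqrt{ab}\,|\Omega|^{1/\tau}=\sqrt{ab},
\qquad \tfrac{1}{\tau}=\tfrac{1}{p}+1 .
\]
Loosely speaking, the measure rewards anisotropy: when \(a\gg b\), \(A_{p}(f)=\sqrt{ab}\) can remain moderate even though the largest second derivative is huge, reflecting that long, thin triangles aligned with the y-axis approximate such an f efficiently.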
108.
This paper deals with compact label-based representations for trees. Consider an n-node undirected connected graph G with a predefined numbering on the ports of each node. The all-ports tree labeling ℒ_all gives each node v of G a label containing the port numbers of all the tree edges incident to v. The upward tree labeling ℒ_up labels each node v by the number of the port leading from v to its parent in the tree. Our measures of interest are the worst-case and total length of the labels used by the scheme, denoted M_up(T) and S_up(T) for ℒ_up and M_all(T) and S_all(T) for ℒ_all. The problem studied in this paper is the following: given a graph G and a predefined port labeling for it, with the ports of each node v numbered by 0,…,deg(v)−1, select a rooted spanning tree for G minimizing (one of) these measures. We show that the problem is polynomial for M_up(T), S_up(T) and S_all(T) but NP-hard for M_all(T) (even for 3-regular planar graphs). We show that for every graph G and port labeling there exists a spanning tree T for which S_up(T) = O(n log log n). We give a tight bound of O(n) in the cases of complete graphs with arbitrary labeling and arbitrary graphs with symmetric port labeling. We conclude by discussing some applications for our tree representation schemes. A preliminary version of this paper appeared in the proceedings of the 7th International Workshop on Distributed Computing (IWDC), Kharagpur, India, December 27–30, 2005: Cohen, R. et al.: Labeling schemes for tree representation. In: Proceedings of the 7th International Workshop on Distributed Computing (IWDC), Lecture Notes in Computer Science, vol. 3741, pp. 13–24 (2005). R. Cohen was supported by the Pacific Theaters Foundation. P. Fraigniaud and D. Ilcinkas were supported by the project "PairAPair" of the ACI Masses de Données, the project "Fragile" of the ACI Sécurité et Informatique, and by the project "Grand Large" of INRIA. A. Korman was supported in part by an Aly Kaufman fellowship. D. Peleg was supported in part by a grant from the Israel Science Foundation.
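As a small illustration of the two labelings (our own sketch; in particular, measuring a label's length by the binary length of its port numbers is an assumption made for the example, not a definition quoted from the paper):

```python
# Rooted spanning tree over a port-numbered graph: for each node v we record,
# for every tree edge (v, u), the port number of that edge at v.
tree_ports = {
    0: {1: 0, 2: 1},           # root: ports 0 and 1 lead to its two children
    1: {0: 2, 3: 0},
    2: {0: 0},
    3: {1: 1},
}
parent = {1: 0, 2: 0, 3: 1}    # node 0 is the root

# Upward labeling L_up: each non-root node stores only the port toward its parent.
L_up = {v: tree_ports[v][p] for v, p in parent.items()}

# All-ports labeling L_all: each node stores the ports of all incident tree edges.
L_all = {v: sorted(ports.values()) for v, ports in tree_ports.items()}

def bits(port):
    return max(1, port.bit_length())   # assumed length measure: binary length of a port number

S_up  = sum(bits(p) for p in L_up.values())
S_all = sum(bits(p) for ports in L_all.values() for p in ports)
print(L_up, S_up)                      # {1: 2, 2: 0, 3: 1} with total length 4
print(L_all, S_all)
```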
109.
In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically using a front propagation approach. We assume that the desired object has a closed boundary. This a priori knowledge of the topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain visited by the front yields a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal paths allows us to extract a closed contour from a 2D image. The approach can also be used to find an open curve when extra information is available to serve as a stopping criterion. Detection of a variety of objects on real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image, a process we call Geodesic Meshing. The proposed method is applied to 3D data with promising results.
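The front-propagation ingredient can be pictured with a generic geodesic-distance computation: treat the image as a weighted grid, expand a front from the user-chosen start point, and read minimal paths off the accumulated cost by steepest descent. The Python sketch below uses Dijkstra on a 4-connected grid as a rough stand-in for the authors' Fast Marching propagation; the toy cost image is invented:

```python
import heapq
import numpy as np

def geodesic_distances(cost, start):
    """Propagate a front from `start` over a 2D cost image (Dijkstra approximation)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                                   # stale entry
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni, nj]                  # cheap where the image suggests a contour
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

# Toy potential: low cost along a bright ridge, high cost elsewhere.
cost = np.ones((5, 5)); cost[2, :] = 0.1
dist = geodesic_distances(cost, (2, 0))
# A new keypoint could be placed where the front first reaches a chosen geodesic length,
# and the minimal path back to the start follows the steepest descent of `dist`.
print(dist[2, 4])   # ~0.4: the ridge is cheap to traverse
```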
110.
Rapid building detection using machine learning
This work describes algorithms for performing discrete object detection, specifically in the case of buildings, where usually only low-quality, RGB-only geospatial reflective imagery is available. We utilize new candidate search and feature extraction techniques to reduce the problem to a machine learning (ML) classification task. Here we can harness the complex patterns of contrast features contained in training data to establish a model of buildings. We avoid costly sliding windows to generate candidates; instead we stitch together well-known image processing techniques to produce candidates for building detection that cover 80–85% of buildings. Reducing the number of possible candidates is important due to the scale of the problem. Each candidate is then classified, which, although the classifier is linear, still costs time and prohibits large-scale evaluation. We propose a candidate alignment algorithm that boosts classification performance to 80–90% precision in linear time, and show it has negligible cost. We also propose a new concept called a Permutable Haar Mesh (PHM), which we use to form and traverse a search space to recover candidate buildings that were lost in the initial preprocessing phase. All code and datasets from this paper are made available online (http://kdl.cs.umb.edu/w/datasets/ and https://github.com/caitlinkuhlman/ObjectDetectionCLUtility).
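The contrast-feature idea can be sketched generically (the rectangle layout, weights, and toy image below are illustrative assumptions, not the paper's PHM construction): Haar-like features are differences of pixel sums over adjacent rectangles, each computable in constant time from an integral image, and a linear model then scores every candidate window from such features.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):            # inclusive corners
    total = ii[bottom, right]
    if top > 0:               total -= ii[top - 1, right]
    if left > 0:              total -= ii[bottom, left - 1]
    if top > 0 and left > 0:  total += ii[top - 1, left - 1]
    return total

def haar_contrast(ii, top, left, h, w):
    """One Haar-like feature: left half minus right half of a candidate window."""
    mid = left + w // 2
    return (rect_sum(ii, top, left, top + h - 1, mid - 1)
            - rect_sum(ii, top, mid, top + h - 1, left + w - 1))

# Toy image: a bright 'roof' patch on dark ground.
img = np.zeros((20, 20)); img[5:10, 4:9] = 1.0
ii = integral_image(img)

# A (hypothetical) linear classifier is just a weighted sum of such features plus a bias.
features = np.array([haar_contrast(ii, 5, 4, 5, 10), rect_sum(ii, 5, 4, 9, 8)])
weights, bias = np.array([0.3, 0.1]), -1.0
score = features @ weights + bias
print("candidate score:", score)   # positive scores would be kept as building detections
```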