992.
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: the conventional mass-based scaling approach, which assumes geometric similarity, and the structure-based approach, which additionally assumes structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young’s modulus, the two approaches yielded close values for the scaling factors of most body regions, with an average difference of 1.5 % in the force scaling factors and 13.5 % in the moment scaling factors. One exception was the thoracic model, with a 19.3 % difference in the deflection scaling factor. As an application example, two 6-year-old child models were generated from a baseline adult model and evaluated against recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which fell within the experimental corridors, and suggested that age-specific structural change of the pelvis deserves further consideration. Towards improved scaling methods for developing biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and of detailed biological material properties across the demographic range of the population.
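The conventional mass-based approach can be sketched in a few lines: a single length scale is derived from total body mass, and all other scaling factors follow from geometric similarity with equal density and a uniform Young's modulus. The function name and the example masses below are illustrative assumptions, not taken from the paper:

```python
def geometric_scale_factors(mass_subject, mass_ref):
    """Conventional mass-based scaling: one length scale lambda derived
    from total body mass, assuming equal density (m ~ L^3) and a
    uniform Young's modulus E."""
    lam = (mass_subject / mass_ref) ** (1.0 / 3.0)  # length scale
    return {
        "length": lam,         # geometric similarity
        "mass": lam ** 3,      # m ~ rho * L^3
        "force": lam ** 2,     # F ~ E * L^2 (equal stress)
        "moment": lam ** 3,    # M ~ E * L^3
        "deflection": lam,     # d ~ L
    }
```

For instance, scaling a hypothetical 78 kg adult model down to a 23 kg child gives a length factor of about 0.67, and the force and moment factors follow as its square and cube.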
993.
Kriging is a well-established approximation technique for deterministic computer experiments. Several Kriging variants exist, and a comparative study is warranted to evaluate their performance characteristics in computational fluid dynamics, specifically in turbomachinery design, where some of the most complex flow situations are observed. Sufficiently accurate flow simulations can take a long time to converge, so this type of simulation can benefit greatly from computationally cheap Kriging models that reduce the computational burden. The Kriging variants ordinary Kriging, universal Kriging and blind Kriging, along with the commonly used response surface approximation (RSA) model, were used to optimize the performance of a centrifugal impeller using CFD analysis. A Reynolds-averaged Navier–Stokes equation solver computed the objective function responses, which, together with the design variables, were used to construct the Kriging variants and the RSA function. A hybrid genetic algorithm was used to find the optimal point in the design space. Blind Kriging produced the best optimal design, while the RSA identified the worst. Changing the shape of the impeller reduced inlet recirculation, which resulted in an increase in efficiency.
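Of the variants compared, ordinary Kriging is the simplest: a constant mean, estimated by generalized least squares, plus a correlated residual. A minimal sketch with a Gaussian correlation function follows; the fixed `theta` and the tiny nugget are illustrative assumptions (in practice `theta` is fit by maximum likelihood):

```python
import numpy as np

def ordinary_kriging(X, y, x_new, theta=1.0):
    """Ordinary Kriging predictor: constant mean + Gaussian-correlated
    residual. X: (n, d) samples, y: (n,) responses, x_new: (d,) query."""
    def corr(A, B):
        # Gaussian correlation exp(-theta * squared distance)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)

    n = len(X)
    R = corr(X, X) + 1e-10 * np.eye(n)        # tiny nugget for stability
    r = corr(X, x_new[None, :])[:, 0]
    ones = np.ones(n)
    R_inv_y = np.linalg.solve(R, y)
    R_inv_1 = np.linalg.solve(R, ones)
    mu = (ones @ R_inv_y) / (ones @ R_inv_1)  # GLS estimate of the mean
    return mu + r @ np.linalg.solve(R, y - mu * ones)
```

Because Kriging interpolates, the predictor reproduces the training responses (up to the nugget), which is what makes it attractive as a cheap surrogate for expensive converged CFD runs.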
994.
Collective knowledge systems: Where the Social Web meets the Semantic Web   (Cited by: 2; self-citations: 0; citations by others: 2)
What can happen if we combine the best ideas from the Social Web and Semantic Web? The Social Web is an ecosystem of participation, where value is created by the aggregation of many individual user contributions. The Semantic Web is an ecosystem of data, where value is created by the integration of structured data from many sources. What applications can best synthesize the strengths of these two approaches, to create a new level of value that is both rich with human participation and powered by well-structured information? This paper proposes a class of applications called collective knowledge systems, which unlock the “collective intelligence” of the Social Web with knowledge representation and reasoning techniques of the Semantic Web.
995.
Recently, there has been increasing interest in directed probabilistic logical models, and a variety of formalisms for describing such models have been proposed. Although many authors provide high-level arguments that models in their formalism can in principle be learned from data, most of the proposed learning algorithms have not yet been studied in detail. We introduce an algorithm, generalized ordering-search, to learn both the structure and the conditional probability distributions (CPDs) of directed probabilistic logical models. The algorithm is based on the ordering-search algorithm for Bayesian networks. We use relational probability trees as a representation for the CPDs. We present experiments on a genetics domain, blocks world domains and the Cora dataset. Editors: Stephen Muggleton, Ramon Otero, Simon Colton.
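The propositional ordering-search idea that the algorithm generalizes can be sketched briefly: given an ordering of the variables, each variable greedily selects parents from among its predecessors, and a local search over orderings wraps around this. Everything below (the greedy parent selection, the adjacent-swap search, and the toy score) is an illustrative simplification, not the paper's relational algorithm:

```python
def parents_given_ordering(order, score):
    """For each variable, greedily pick parents from its predecessors
    in the ordering, keeping any candidate that improves the score."""
    parents = {}
    for i, v in enumerate(order):
        chosen = []
        for cand in order[:i]:
            if score(v, chosen + [cand]) > score(v, chosen):
                chosen.append(cand)
        parents[v] = chosen
    return parents

def ordering_search(order, score, sweeps=2):
    """Hill-climb over orderings by trying adjacent swaps, keeping a
    swap whenever it improves the total network score."""
    def total(o):
        p = parents_given_ordering(o, score)
        return sum(score(v, p[v]) for v in o)

    best, best_s = list(order), total(order)
    for _ in range(sweeps):
        for i in range(len(best) - 1):
            cand = best[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            s = total(cand)
            if s > best_s:
                best, best_s = cand, s
    return best, parents_given_ordering(best, score)
```

With a decomposable score (e.g. BIC per family), the key property is that parent selection for one variable is independent of the others once the ordering is fixed.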
996.
We study assignment games in which jobs select machines and in which certain pairs of jobs may conflict: they may incur an additional cost when both are assigned to the same machine, beyond the cost associated with the increase in load. Questions about such interactions apply beyond allocating jobs to machines: when people in a social network choose to align themselves with a group or party, they typically do so based not only on the inherent quality of that group, but also on which of their friends (or enemies) choose it as well. We show how semi-smoothness, a recently introduced generalization of smoothness, is necessary to find tight bounds on the robust price of anarchy, and thus on the quality of correlated and Nash equilibria, for several natural job-assignment games with interacting jobs. For most cases, our bounds on the robust price of anarchy are either exactly 2 or approach 2. We also prove new convergence results implied by semi-smoothness for our games. Finally, we consider coalitional deviations and prove results about the existence and quality of strong equilibria.
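For intuition, the pure price of anarchy of a tiny instance of such a game — two machines, load costs, plus a pairwise conflict penalty — can be computed by brute-force enumeration. This is an illustrative toy, not the semi-smoothness machinery used for the paper's bounds:

```python
import itertools

def job_assignment_costs(n_jobs, conflicts, penalty=1.0):
    """Per-player cost tables for a 2-machine assignment game: a job's
    cost is its machine's load plus a penalty for each conflicting job
    sharing that machine."""
    table = {}
    for a in itertools.product((0, 1), repeat=n_jobs):
        load = (a.count(0), a.count(1))
        table[a] = tuple(
            load[a[i]] + penalty * sum(
                1 for j in range(n_jobs)
                if j != i and a[j] == a[i] and frozenset((i, j)) in conflicts)
            for i in range(n_jobs))
    return table

def price_of_anarchy(table):
    """Worst pure Nash equilibrium cost over the social optimum."""
    def social(a):
        return sum(table[a])

    def is_nash(a):
        for i in range(len(a)):
            b = list(a)
            b[i] = 1 - b[i]                  # unilateral machine switch
            if table[tuple(b)][i] < table[a][i]:
                return False
        return True

    opt = min(social(a) for a in table)
    worst_ne = max(social(a) for a in table if is_nash(a))
    return worst_ne / opt
```

With two conflicting jobs and a large penalty, the only equilibria separate the jobs, so the price of anarchy is 1; larger instances with asymmetric machine qualities are where the gap to the optimum opens up.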
998.
Explaining IS continuance in environments where usage is mandatory   (Cited by: 1; self-citations: 0; citations by others: 1)
Several research efforts over the last decade have attempted to explain user acceptance in mandated environments, and this research continues in the same direction. It addresses users’ satisfaction in mandated environments to further our understanding of how mandated use of information systems (IS) can be managed effectively beyond initial adoption. To better explain users’ IS continuance, a revised post-acceptance model is proposed and empirically tested using structural equation modelling. The results demonstrate the reliability and validity of the proposed measurement model and show that confirmed expectations and ease-of-use perceptions explain 61% of users’ satisfaction in this setting. Our findings have important implications for the management of users in mandated environments as well as for further research in the area of mandated use. To that end, we offer directions for future research.
999.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n²) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points. In absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea of our method is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude larger than those reported in previous work. The cost of this acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error remains amply controlled by the Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
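The objective being approximated can be stated concisely: the KCDE estimate of p(y | x) is a ratio of kernel sums, and bandwidths are chosen to maximize the leave-one-out cross-validated log-likelihood. Below is the exact-but-unscalable O(n²) baseline with Gaussian kernels — the quantity the dual-tree algorithm accelerates, not the algorithm itself; the bandwidth names `hx`, `hy` are illustrative:

```python
import math

def gauss(u, h):
    """1-D Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2.0 * math.pi))

def kcde(x, y, data, hx, hy):
    """Kernel conditional density estimate of p(y | x):
    a ratio of a joint kernel sum to a marginal kernel sum."""
    num = sum(gauss(x - xi, hx) * gauss(y - yi, hy) for xi, yi in data)
    den = sum(gauss(x - xi, hx) for xi, yi in data)
    return num / den

def loo_log_likelihood(data, hx, hy):
    """O(n^2) leave-one-out CV objective: each point is scored by the
    estimate built from all other points."""
    total = 0.0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        total += math.log(max(kcde(x, y, rest, hx, hy), 1e-300))
    return total
```

A bandwidth pair that tracks the conditional structure scores higher than one that washes it out, which is exactly the signal the selection procedure exploits.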
1000.
This paper presents a novel multiresolution image coding scheme based on the human visual system and statistical modelling. It decorrelates the input image into a number of subbands using a lifting-based wavelet transform. The codec employs a novel statistical encoding algorithm to code the coefficients in the detail subbands. Perceptual weights regulate the threshold value of each detail subband required in the statistical encoding process, while the baseband coefficients are losslessly coded. An extension of the codec to progressive transmission of images is also developed. To evaluate the coding scheme, it was applied to a number of test images and its performance with and without perceptual weights was assessed. The results indicate a significant improvement in both subjective and objective quality of the reconstructed images when perceptual weights are employed. The proposed technique was also compared to JPEG and JPEG2000; it outperforms both standards at low compression ratios, while offering satisfactory performance at higher compression ratios.
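The lifting scheme underlying such a wavelet transform splits a signal into even and odd samples, predicts the odds from the evens (yielding detail coefficients), then updates the evens (yielding approximation coefficients); the inverse transform simply runs the steps backwards. A minimal sketch using the Haar wavelet on an even-length 1-D signal — the codec itself works on 2-D images and may use a different filter, so this is only illustrative:

```python
def haar_lift(signal):
    """One level of the Haar wavelet via lifting: predict then update."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse transform: undo the update, then undo the predict."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because each lifting step is trivially invertible, reconstruction is exact; lossy coding then comes entirely from thresholding and quantizing the detail coefficients, which is where the perceptual weights act.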