31.
An analysis of the flow of a power-law fluid in a spiral mandrel die is presented. The analysis is applied to study the effect of various die design parameters on the flow distribution at the end of the spiral mandrel section. Three variables have a very strong effect on the flow distribution: the number of grooves, the initial clearance, and the groove helix angle. The distribution is improved by increasing the number of grooves, by using a non-zero initial clearance, and by using a relatively small helix angle. Two more variables with a significant effect on the flow distribution are the taper angle and the initial groove depth. The optimum taper angle was found to lie between 1 and 3 degrees. Distribution uniformity improves with increasing initial groove depth, while the pressure drop decreases at the same time.
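As a rough illustration of the constitutive model behind such an analysis, the sketch below evaluates the Ostwald-de Waele (power-law) apparent viscosity eta = K * gamma_dot**(n - 1); the consistency index K and power-law index n are hypothetical placeholder values, not parameters from the paper.

```python
import numpy as np

def power_law_viscosity(shear_rate, K=1000.0, n=0.4):
    """Apparent viscosity of a power-law (Ostwald-de Waele) fluid:
    eta = K * gamma_dot**(n - 1); shear-thinning when n < 1."""
    return K * shear_rate ** (n - 1.0)

# Hypothetical parameters; evaluate over shear rates typical of die flow.
for gamma_dot in np.logspace(-1, 3, 5):        # 0.1 to 1000 1/s
    eta = power_law_viscosity(gamma_dot)
    print(f"shear rate {gamma_dot:8.1f} 1/s -> eta {eta:10.2f} Pa.s")
```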
32.
33.
Increasing numbers of hard environmental constraints are being imposed on urban traffic networks by authorities in an attempt to mitigate pollution caused by traffic. However, it is not trivial for authorities to assess the cost of imposing such hard environmental constraints. This leads to difficulties both in setting the constraint values and in implementing effective control measures. For that reason, quantifying the cost of imposing hard environmental constraints on a given network becomes crucial. This paper first shows that, for a given network, this cost depends not only on the choice of environmental constraints but also on the control measures considered. Next, we present an assessment criterion that quantifies the loss of optimality, under the control measures considered, that is incurred by introducing the environmental constraints. The criterion can be computed by solving a bi-level programming problem with and without environmental constraints. A simple case study shows its practicability as well as the differences between this framework and other frameworks integrating environmental aspects. The proposed framework is widely applicable when assessing the interaction between traffic and its environmental effects.
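A minimal sketch of the assessment idea under made-up stand-in functions: the loss of optimality is the gap between the best achievable objective with and without the hard environmental cap. The delay and emissions functions, the control variable u, and the cap value are all hypothetical, and the paper's bi-level structure is collapsed here to a single-level grid search purely for illustration.

```python
import numpy as np

# Hypothetical toy network with one control variable u (e.g. a signal split):
# total delay to be minimized, emissions subject to a hard cap. Both made up.
def delay(u):
    return (u - 0.3) ** 2 + 1.0        # total delay (arbitrary units)

def emissions(u):
    return 2.0 - 1.5 * u               # pollutant load (arbitrary units)

def best_delay(u_grid, cap=None):
    """Best achievable delay, optionally under the emissions cap."""
    feasible = u_grid if cap is None else u_grid[emissions(u_grid) <= cap]
    return delay(feasible).min()

u = np.linspace(0.0, 1.0, 1001)
unconstrained = best_delay(u)
constrained = best_delay(u, cap=1.4)   # hypothetical hard cap
print("loss of optimality:", constrained - unconstrained)
```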
34.
We introduce a construction that turns a category of pure state spaces and operators into a category of observable algebras and superoperators. For example, it turns the category of finite-dimensional Hilbert spaces into the category of finite-dimensional C*-algebras and completely positive maps. In particular, the new category contains both quantum and classical channels, providing elegant abstract notions of preparation and measurement. We also consider nonstandard models that can be used to investigate which notions from algebraic quantum information theory are operationally justifiable.
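To make the notion of a completely positive map concrete, here is a small sketch in the standard Kraus-operator picture (a textbook fact of quantum information, not the paper's categorical construction): measurement in the computational basis acts as a quantum-to-classical channel that decoheres a qubit state.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply a completely positive map in Kraus form:
    rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Computational-basis measurement as a quantum-to-classical channel:
# the Kraus operators are the two basis projectors.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
print(apply_channel(plus, [P0, P1]))   # off-diagonals vanish: a classical bit
```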
35.
This work presents a method for efficiently simplifying the pressure projection step in a liquid simulation. We first devise a straightforward dimension reduction technique that dramatically reduces the cost of solving the pressure projection. Next, we introduce a novel change of basis that satisfies free-surface boundary conditions exactly, regardless of the accuracy of the pressure solve. When combined, these ideas greatly reduce the computational complexity of the pressure solve without compromising free-surface boundary conditions at the highest level of detail. Our techniques are easy to parallelize, and they effectively eliminate the computational bottleneck for large liquid simulations.
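A minimal sketch of the generic dimension-reduction idea (a plain Galerkin projection, not the paper's specific basis or its free-surface treatment): project the pressure system onto a small basis U, solve the reduced system, and lift the result back to the full space.

```python
import numpy as np

def reduced_pressure_solve(A, b, U):
    """Galerkin-reduced solve of A p = b: restrict p to span(U),
    solve the small r x r system, and lift back to full space."""
    q = np.linalg.solve(U.T @ A @ U, U.T @ b)
    return U @ q

# Toy example: random SPD stand-in for the pressure matrix, random basis.
rng = np.random.default_rng(0)
n, r = 200, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # symmetric positive definite
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
b = rng.standard_normal(n)
p = reduced_pressure_solve(A, b, U)
print("residual norm:", np.linalg.norm(A @ p - b))
```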
36.
Support for generic programming was added to the Java language in 2004, representing perhaps the most significant change to one of the most widely used programming languages today. Researchers and language designers anticipated this addition would relieve many long-standing problems plaguing developers, but surprisingly, no one has yet measured how generics have been adopted and used in practice. In this paper, we report on the first empirical investigation into how Java generics have been integrated into open source software by automatically mining the history of 40 popular open source Java programs, traversing more than 650 million lines of code in the process. We evaluate five hypotheses and research questions about how Java developers use generics. For example, our results suggest that generics sometimes reduce the number of type casts and that generics are usually adopted by a single champion in a project, rather than by all committers. We also offer insights into why some features may be adopted sooner and other features may be held back.
37.
The JISC-funded Focus on Access to Institutional Resources (FAIR) Programme ran from 2002 to 2005. The 14 projects within this programme investigated the cultural, organisational, legal and technical factors involved in providing places where institutional digital content, of which there is an increasing amount, could be stored and, where appropriate, shared with others in the Higher and Further Education communities. The primary technology enabling such sharing is the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), a lightweight protocol based on sharing metadata about the digital content available. The technical issues were at times overshadowed by the cultural, organisational and legal issues that had to be addressed. The experience of the Programme as a whole provides valuable insight into the issues involved in sharing content and a good starting point for other institutions wishing to investigate this capability. A Synthesis of the Programme was commissioned in late 2004 to capture this experience and the tangible outputs that were produced. A website was created providing a comprehensive listing of all project outputs, and a printed brochure was published in late 2005 as an introduction to the Programme and its findings. This article summarises the findings of the FAIR Synthesis and provides a range of pointers to further information for subsequent investigation.
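For readers unfamiliar with OAI-PMH, the sketch below issues a ListRecords request with the standard oai_dc metadata prefix and extracts Dublin Core titles. The repository base URL is hypothetical, and real harvesters must also follow resumptionToken paging, which is omitted here.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.ac.uk/oai"   # hypothetical endpoint

def list_record_titles(base_url, metadata_prefix="oai_dc"):
    """Issue an OAI-PMH ListRecords request and yield Dublin Core titles.
    (No resumptionToken paging; a real harvester must follow it.)"""
    url = base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    tree = ET.parse(urlopen(url))
    ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
          "dc": "http://purl.org/dc/elements/1.1/"}
    for record in tree.iterfind(".//oai:record", ns):
        title = record.find(".//dc:title", ns)
        yield title.text if title is not None else "(no title)"

# for title in list_record_titles(BASE_URL):
#     print(title)
```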
38.
The classical Geiringer theorem addresses the limiting frequency of occurrence of various alleles after repeated application of crossover. It has been adapted to the setting of evolutionary algorithms and, more recently, to reinforcement learning and Monte-Carlo tree search methodology, to cope with the rather challenging question of action evaluation at chance nodes. The theorem motivates novel dynamic parallel algorithms that are explicitly described in the current paper for the first time. The algorithms involve independent agents traversing a dynamically constructed directed graph that may have loops and multiple edges. A rather elegant and profound category-theoretic model of cognition in biological neural networks, developed over the last thirty years by the well-known French mathematician Professor Andree Ehresmann jointly with the neurosurgeon Jan Paul Vanbremeersch, provides a hint at the connection between such algorithms and Hebbian learning.
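A minimal simulation of the classical Geiringer effect (not the paper's parallel MCTS algorithm): under repeated crossover alone, the frequency of a multi-locus schema approaches the product of its single-locus marginal frequencies. Population size, string length, and generation count below are arbitrary.

```python
import random

def crossover_generation(pop):
    """One generation of uniform crossover (no selection, no mutation)."""
    return [tuple(random.choice(pair)
                  for pair in zip(random.choice(pop), random.choice(pop)))
            for _ in range(len(pop))]

def schema_freq(pop, pattern):
    """Frequency of individuals matching a schema ('*' = don't care)."""
    hits = sum(all(p == "*" or g == int(p) for g, p in zip(ind, pattern))
               for ind in pop)
    return hits / len(pop)

random.seed(1)
pop = [(1, 1, 0)] * 500 + [(0, 0, 1)] * 500    # maximally linked start
for _ in range(30):
    pop = crossover_generation(pop)

# Geiringer: the joint schema frequency approaches the product of marginals.
print(schema_freq(pop, "11*"))
print(schema_freq(pop, "1**") * schema_freq(pop, "*1*"))
```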
39.
Understanding, monitoring and modelling attributes of seagrass biodiversity, such as species composition, richness, abundance, spatial patterns, and disturbance dynamics, requires spatial information. This work assessed the accuracy of commonly available airborne hyper-spectral and satellite multi-spectral image data sets for mapping seagrass species composition, horizontally projected foliage cover and above-ground dry-weight biomass. The work was carried out on the Eastern Banks in Moreton Bay, Australia, an area of shallow and clear coastal waters containing a range of seagrass species, cover and biomass levels. Two types of satellite image data were used: Quickbird-2 multi-spectral and Landsat-5 Thematic Mapper multi-spectral. Airborne hyper-spectral image data were acquired from a CASI-2 sensor using a pixel size of 4.0 m. The mapping was constrained to depths shallower than 3.0 m, based on past modelling of the separability of seagrass reflectance signatures at increasing water depths. Our results demonstrated that mapping of seagrass cover, species and biomass to high accuracy levels (> 80%) was not possible across all image types. For each parameter mapped, airborne hyper-spectral data produced the highest overall accuracies (46%), followed by Quickbird-2 and then Landsat-5 Thematic Mapper. The low accuracy levels were attributed to the mapping methods and to difficulties in matching locations between image and field data sets. Accurate mapping of seagrass cover, species composition and biomass using simple approaches requires further work with high-spatial-resolution (< 5 m) and/or hyper-spectral image data. Further work is also required to determine if and how the seagrass maps produced here are suitable for measuring attributes of seagrass biodiversity, for modelling floral and faunal biodiversity properties of seagrass environments, and for scaling up seagrass ecosystem models.
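As a sketch of the standard accuracy assessment used in such mapping studies (the error matrix below is hypothetical, not the paper's data): overall accuracy is the trace of the confusion matrix over its total, with producer's and user's accuracies as the per-class complements of omission and commission error.

```python
import numpy as np

# Hypothetical error matrix for three seagrass classes:
# rows = reference (field) class, columns = mapped (image) class.
cm = np.array([[30,  8,  2],
               [10, 25,  5],
               [ 5,  7, 18]])

overall = np.trace(cm) / cm.sum()          # fraction of sites mapped correctly
producer = np.diag(cm) / cm.sum(axis=1)    # 1 - omission error, per class
user = np.diag(cm) / cm.sum(axis=0)        # 1 - commission error, per class
print(f"overall accuracy: {overall:.1%}")
print("producer's accuracy:", np.round(producer, 2))
print("user's accuracy:", np.round(user, 2))
```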
40.
This paper describes methods for recovering the time-varying shape and motion of non-rigid 3D objects from uncalibrated 2D point tracks. For example, given a video recording of a talking person, we would like to estimate the 3D shape of the face at each instant and learn a model of facial deformation. Time-varying shape is modeled as a rigid transformation combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed, so additional assumptions about the deformations are required. We first suggest restricting shapes to lie within a low-dimensional subspace and describe estimation algorithms. However, this restriction alone is insufficient to constrain reconstruction. To address these problems, we propose a reconstruction method using a Probabilistic Principal Components Analysis (PPCA) shape model, together with an estimation algorithm that simultaneously estimates 3D shape and motion for each instant, learns the PPCA model parameters, and robustly fills in missing data points. We then extend the model to capture temporal dynamics in object shape, allowing the algorithm to handle severe cases of missing data robustly.
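A minimal sketch of the low-dimensional subspace shape model (the generic linear basis-shape formulation, not the authors' full PPCA estimator): each frame's 3D shape is the mean shape plus a coefficient-weighted sum of basis deformations. All arrays below are random placeholders.

```python
import numpy as np

def shape_at_frame(mean_shape, basis, coeffs):
    """Non-rigid shape in a low-dimensional subspace:
    S_t = S_mean + sum_k c_k * B_k, with shapes as 3 x P point matrices."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

P, K = 50, 3                                  # 3D points, basis shapes
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((3, P))      # placeholder mean face
basis = rng.standard_normal((K, 3, P))        # K deformation modes
coeffs = np.array([0.5, -0.2, 0.1])           # per-frame coefficients
S = shape_at_frame(mean_shape, basis, coeffs)
print(S.shape)                                # (3, 50)
```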