Sort order: 3,483 results in total; search time 46 ms
81.
Neural network modules based on page-oriented dynamic digital photorefractive memory are described. The modules can implement two different interconnection organizations, fan-out and fan-in, depending on their target network applications. Neural network learning is realized by the real-time memory update of dynamic digital photorefractive memory. Physical separation of subvolumes in the page-oriented photorefractive memory architecture contributes to the low cross talk and high diffraction efficiency of the stored interconnection weights. Digitally encoded interconnection weights ensure high accuracy, providing superior neural network system scalability. Module scalability and feedforward throughput have been investigated based on photorefractive memory geometry and the photodetector power requirements. The following four approaches to extend module scalability are discussed: partial optical summation, semiparallel feedforward operation, time partitioning, and interconnection matrix partitioning. Learning capabilities of the system are investigated in terms of required interconnection primitives for implementing learning processes and three memory-update schemes. The experimental results of Perceptron learning network implementation with 900 input neurons with digital 6-bit accuracy are reported.
82.
83.
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A nonclairvoyant scheduler must assign the processes to processors and scale the speeds of the processors. We consider the objective of energy plus flow time. We assume that a processor running at speed s uses power s^α for some constant α > 1. For processes that may have side effects or that are not checkpointable, we show an Ω(m^((α−1)/α²)) lower bound on the competitive ratio of any randomized algorithm. For checkpointable processes without side effects, we give an O(log m)-competitive algorithm. Thus for processes that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, whereas for checkpointable processes without side effects it grows only slowly. We then show a lower bound of Ω(log^(1/α) m) on the competitive ratio of any randomized algorithm for checkpointable processes without side effects.
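To make the power assumption concrete (a sketch under stated assumptions, not the paper's scheduling algorithm): at speed s a processor draws power s^α, so a single job of work w run at constant speed s consumes energy s^α · (w/s) = w·s^(α−1) and has flow time w/s; minimizing their sum gives the optimal constant speed s = (α−1)^(−1/α).

```python
def energy_plus_flow_time(w, s, alpha):
    # energy = power * time = s**alpha * (w / s); flow time = w / s
    return w * s ** (alpha - 1) + w / s

def optimal_speed(alpha):
    # setting d/ds [s**(alpha - 1) + 1/s] = 0 gives s = (alpha - 1)**(-1/alpha)
    return (alpha - 1) ** (-1.0 / alpha)
```

The paper's setting is much harder: jobs arrive online, are only partially parallelizable, and the scheduler is nonclairvoyant, which is where the Ω and O(log m) competitive-ratio bounds come in.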
84.
Jeff Jones 《Natural computing》2011,10(4):1345-1369
The single-celled organism Physarum polycephalum efficiently constructs and minimises dynamical nutrient transport networks resembling proximity graphs in the Toussaint hierarchy. We present a particle model which collectively approximates the behaviour of Physarum. We demonstrate spontaneous transport network formation and complex network evolution using the model and show that the model collectively exhibits quasi-physical emergent properties, allowing it to be considered as a virtual computing material. This material is used as an unconventional method to approximate spatially represented geometry problems by representing network nodes as nutrient sources. We demonstrate three different methods for the construction, evolution and minimisation of Physarum-like transport networks which approximate Steiner trees, relative neighbourhood graphs, convex hulls and concave hulls. We extend the model to adapt population size in response to nutrient availability and show how network evolution is dependent on relative node position (specifically inter-node angle), sensor scaling and nutrient concentration. We track network evolution using a real-time method to record transport network topology in response to global differences in nutrient concentration. We show how Steiner nodes are utilised at low nutrient concentrations whereas direct connections to nutrients are favoured when nutrient concentration is high. The results suggest that the foraging and minimising behaviour of Physarum-like transport networks reflects a complex interplay between nutrient concentration, nutrient location, maximising foraging area coverage and minimising transport distance. The properties and behaviour of the synthetic virtual plasmodium may be useful in future physical instances of distributed unconventional computing devices, and may also provide clues to the generation of emergent computation behaviour by Physarum.
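At its core, a particle approximation of this kind has each agent sample a shared trail map with three forward-facing sensors, rotate toward the strongest reading, then move and deposit trail; network structure emerges from many such agents plus diffusion and decay. A minimal single-agent sketch (function names and parameter values are illustrative assumptions, not taken from the paper):

```python
import math
import random

def sense(trail, x, y, heading, angle, offset):
    # sample the trail map at a sensor placed `offset` ahead, rotated by `angle`
    sx = int(x + offset * math.cos(heading + angle)) % len(trail[0])
    sy = int(y + offset * math.sin(heading + angle)) % len(trail)
    return trail[sy][sx]

def step(trail, agent, sa=math.pi / 4, ra=math.pi / 4, so=9.0, deposit=5.0):
    # one update of a single agent: sense, rotate toward the strongest
    # reading, move one unit forward, deposit trail at the new cell
    x, y, h = agent
    f = sense(trail, x, y, h, 0.0, so)
    l = sense(trail, x, y, h, +sa, so)
    r = sense(trail, x, y, h, -sa, so)
    if f >= l and f >= r:
        pass                          # keep heading
    elif l > r:
        h += ra
    elif r > l:
        h -= ra
    else:
        h += random.choice([-ra, ra])  # break ties randomly
    x = (x + math.cos(h)) % len(trail[0])
    y = (y + math.sin(h)) % len(trail)
    trail[int(y)][int(x)] += deposit
    return (x, y, h)
```

With thousands of agents, trail diffusion, and decay added, mutually reinforcing paths between nutrient sources can stabilise into transport networks of the kind described above.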
85.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n²) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points. In absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea in our method is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude greater in size than those reported in previous work. The cost of this high acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error is still amply controlled by our Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
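For reference, the cross-validated objective that the dual-tree method accelerates can be evaluated naively in O(n²); a sketch for scalar x and y with Gaussian kernels (the bandwidth names hx, hy and the exact estimator form here are assumptions for illustration, and may differ in detail from the paper's):

```python
import math

def gauss(u, h):
    # Gaussian kernel with bandwidth h
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def loo_log_likelihood(xs, ys, hx, hy):
    # naive O(n^2) leave-one-out cross-validated log-likelihood for a
    # Nadaraya-Watson-style conditional density estimate f(y | x)
    total = 0.0
    n = len(xs)
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            if j == i:
                continue
            w = gauss(xs[i] - xs[j], hx)
            num += w * gauss(ys[i] - ys[j], hy)
            den += w
        total += math.log(num / den)
    return total
```

Bandwidth selection maximizes this objective over (hx, hy); the quadratic inner loop is exactly what makes million-point datasets infeasible without the tree-based approximation.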
86.
Jeff Winter, Kari Rönkkö 《Journal of Systems and Software》2010,83(11):2059-2072
This article presents an experience report in which we compare eight years of experience of product-related usability testing and evaluation with principles of software process improvement (SPI). In theory the product and process views are often seen as complementary, but studies of industry have demonstrated the opposite. Therefore, more empirical studies are needed to understand and improve the present situation. We find areas of close agreement as well as areas where our work illuminates new characteristics. It has been identified that successful SPI depends on being successfully combined with a business orientation. Usability and business orientation also have strong connections, although this has not been extensively addressed in SPI publications. One reason could be that usability focuses on product metrics whilst today's SPI mainly focuses on process metrics. Another is that today's SPI is dominated by striving towards a standardized, controllable, and predictable software engineering process, whilst successful usability efforts in organisations are more about creating a creative organisational culture that advocates a useful product throughout the development and product life cycle. We provide a study and discussion that support future development when combining usability and product focus with SPI, in particular if these efforts are related to usability process improvement efforts.
87.
Anneliese A. Andrews, Jeff Offutt, Curtis Dyreson, Christopher J. Mallery, Kshamta Jerath, Roger Alexander 《Information and Software Technology》2010,52(1):52-66
Web applications are fast becoming more widespread, larger, more interactive, and more essential to the international use of computers. It is well understood that web applications must be highly dependable, and as a field we are just now beginning to understand how to model and test Web applications. One straightforward technique is to model Web applications as finite state machines. However, large numbers of input fields, input choices and the ability to enter values in any order combine to create a state space explosion problem. This paper evaluates a solution that uses constraints on the inputs to reduce the number of transitions, thus compressing the FSM. The paper presents an analysis of the potential savings of the compression technique and reports actual savings from two case studies.
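A toy count illustrates the state-space explosion that input constraints address (this is an illustration of the argument, not the paper's actual compression technique): with k independent input fields enterable in any order, an explicit FSM has one state per subset of already-filled fields and k·2^(k−1) transitions, while an "any order is equivalent" constraint collapses this to a single canonical sequence of k transitions.

```python
from math import comb

def fsm_transitions_unconstrained(k):
    # states = subsets of the k fields already filled; from a state with
    # i fields filled there is one transition per remaining field
    return sum(comb(k, i) * (k - i) for i in range(k + 1))

def fsm_transitions_with_order_constraint(k):
    # an "any order is equivalent" input constraint keeps one canonical
    # entry order: k transitions in total
    return k
```

Even for k = 10 fields the unconstrained model needs 5,120 transitions versus 10 under the constraint, which is the kind of reduction that makes FSM-based test generation tractable.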
88.
Web software applications have become complex, sophisticated programs that are based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment: most of the software is never delivered to customers' computers, but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability that users have to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution, so the possible control flows cannot be determined statically, prohibiting several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partial dynamic web application development. The model is based on the notion of atomic sections, which allow analysis tools to build the analog of a control flow graph for web applications. The atomic section model has numerous uses; this paper applies it to the problem of testing web applications.
89.
We give an example of a monoid with finitely many left and right ideals, all of whose Schützenberger groups are presentable by finite complete rewriting systems, and so each have finite derivation type, but such that the monoid itself does not have finite derivation type, and therefore does not admit a presentation by a finite complete rewriting system. The example also serves as a counterexample to several other natural questions regarding complete rewriting systems and finite derivation type. Specifically it allows us to construct two finitely generated monoids M and N with isometric Cayley graphs, where N has finite derivation type (respectively, admits a presentation by a finite complete rewriting system) but M does not. This contrasts with the case of finitely generated groups for which finite derivation type is known to be a quasi-isometry invariant. The same example is also used to show that neither of these two properties is preserved under finite Green index extensions.
90.
Creating non-minimal triangulations for use in inference in mixed stochastic/deterministic graphical models (total citations: 1; self-citations: 0; citations by others: 1)
We demonstrate that certain large-clique graph triangulations can be useful for reducing computational requirements when making queries on mixed stochastic/deterministic graphical models. This is counter to the conventional wisdom that triangulations that minimize clique size are always most desirable for use in computing queries on graphical models. Many of these large-clique triangulations are non-minimal and are thus unattainable via the popular elimination algorithm. We introduce ancestral pairs as the basis for novel triangulation heuristics and prove that no more than the addition of edges between ancestral pairs needs to be considered when searching for state space optimal triangulations in such graphs. Empirical results on random and real world graphs are given. We also present an algorithm and correctness proof for determining if a triangulation can be obtained via elimination, and we show that the decision problem associated with finding optimal state space triangulations in this mixed setting is NP-complete.
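The elimination algorithm referred to above triangulates a graph by removing vertices in some order and connecting each removed vertex's remaining neighbours; the fill-in edges added this way are what make the result chordal. A compact sketch (the adjacency-dict representation and function name are assumptions for illustration, not the paper's implementation):

```python
def eliminate(adjacency, order):
    # triangulate by vertex elimination: for each vertex in `order`,
    # connect its remaining neighbours pairwise, then remove it;
    # returns the set of fill-in edges added
    adj = {v: set(ns) for v, ns in adjacency.items()}
    fill = set()
    for v in order:
        nbrs = list(adj[v])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill.add(frozenset((a, b)))
        for u in nbrs:
            adj[u].discard(v)
        del adj[v]
    return fill
```

Every triangulation produced this way is attainable via elimination by construction; the abstract's point is that some useful non-minimal, large-clique triangulations cannot be produced by any elimination order, which is why the ancestral-pair heuristics are introduced.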