10,000 results found (search took 93 ms)
991.
Alexandre Petrenko Adenilso Simao José Carlos Maldonado 《International Journal on Software Tools for Technology Transfer (STTT)》2012,14(4):383-386
Model-based testing focuses on testing techniques that rely on the use of models. The diversity of systems and software to be tested implies the need for research on a variety of models and methods for test automation. We briefly review this research area and introduce several papers selected from the 22nd International Conference on Testing Software and Systems (ICTSS).
992.
José M. Cecilia José M. García Ginés D. Guerrero Miguel A. Martínez-del-Amor Mario J. Pérez-Jiménez Manuel Ujaldón 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2012,16(2):231-246
Membrane Computing is a discipline aiming to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, and its efficient simulation poses challenges in three different aspects: the intrinsic massive parallelism of P systems, an exponential computational workspace, and a non-intensive floating-point nature. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time on different instances of Graphics Processing Units (GPUs). For efficient handling of the exponential workspace created by the P system computation, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run, and, considering the new hardware generation from Nvidia, Fermi, we obtain a total speed-up exceeding four orders of magnitude when running our simulations on the Tesla S2050 server.
993.
Tomáš Kroupa 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2012,16(11):1851-1861
We generalise belief functions to many-valued events, which are represented by elements of the Lindenbaum algebra of infinite-valued Łukasiewicz propositional logic. Our approach is based on the mass assignments used in the Dempster–Shafer theory of evidence. A generalised belief function is totally monotone and has a Choquet integral representation with respect to a unique belief measure on Boolean events.
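The mass-assignment machinery the abstract builds on can be illustrated on ordinary Boolean events. The sketch below is a minimal example of a classical Dempster–Shafer belief function, Bel(A) = Σ m(B) over focal sets B ⊆ A, not the paper's many-valued generalisation; the frame {a, b, c} and the mass values are invented for illustration.

```python
def belief(mass, event):
    """Bel(A): sum the masses of all focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal <= set(event))

# Hypothetical mass assignment over the frame {a, b, c}
mass = {
    frozenset("a"): 0.5,
    frozenset("ab"): 0.25,
    frozenset("abc"): 0.25,  # mass on the whole frame models ignorance
}

print(belief(mass, "a"))    # 0.5
print(belief(mass, "ab"))   # 0.75
print(belief(mass, "abc"))  # 1.0
```

Total monotonicity and the normalisation Bel(Ω) = 1 are visible directly: enlarging the event can only add focal sets to the sum.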
994.
A population protocol is a distributed computing model for passively mobile systems, in which a number of agents change their states through pairwise interactions. In this paper, we investigate the solvability of self-stabilizing leader election in population protocols without any kind of oracle. We identify the necessary and sufficient conditions for solving self-stabilizing leader election in population protocols in terms of local memory complexity and fairness assumptions. This paper shows that, under the assumption of global fairness, no protocol using only n−1 states can solve self-stabilizing leader election in complete interaction graphs, where n is the number of agents in the system. To prove this impossibility, we introduce a novel proof technique called the closed-set argument. In addition, we propose a self-stabilizing leader election protocol using n states that works even under the unfairness assumption. This protocol requires exact knowledge of the number of agents in the system. We also show that such knowledge is necessary to construct any self-stabilizing leader election protocol.
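The pairwise-interaction model described above is easy to simulate. The sketch below runs the classic two-state leader election protocol, (L, L) → (L, F), under a random scheduler; it is a textbook population-protocol example, not the n-state self-stabilizing protocol proposed in the paper, and the scheduler here is only probabilistically fair.

```python
import random

def leader_election(n, seed=0):
    """Simulate the classic (non-self-stabilizing) two-state protocol:
    when two leaders interact, one becomes a follower."""
    rng = random.Random(seed)
    states = ["L"] * n  # every agent starts as a leader candidate
    while states.count("L") > 1:
        i, j = rng.sample(range(n), 2)  # scheduler picks an interacting pair
        if states[i] == "L" and states[j] == "L":
            states[j] = "F"  # (L, L) -> (L, F)
    return states

print(leader_election(10).count("L"))  # 1
```

Note the contrast with the paper's setting: this protocol works only from the all-leader initial configuration, whereas a self-stabilizing protocol must converge from arbitrary configurations.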
995.
Jörg Brunsmann Wolfgang Wilkes Gunter Schlageter Matthias Hemmje 《International Journal on Digital Libraries》2012,12(1):27-39
Providing access to digital information for the indefinite future is the intention of long-term digital preservation systems. One application domain that certainly needs to implement such long-term digital preservation processes is the design and engineering industry. In this industry, products are designed, manufactured, and operated with the help of sophisticated software tools provided by product lifecycle management (PLM) systems. During all PLM phases, including geographically distributed cross-domain and cross-company collaboration, a huge amount of heterogeneous digital product data and metadata is created. Legal and economic requirements demand that this product data be archived and preserved for a long period of time. Unfortunately, the software that is able to interpret the data will become obsolete earlier than the data, since the software and hardware lifecycle is short-lived compared to a product lifecycle. Companies in the engineering industry are beginning to realize that their data is in danger of becoming unusable while the products are in operation for several decades. To address this issue, various academic and industrial initiatives have been launched that try to solve this problem. This article provides an overview of these projects, including their motivations, identified problems, and proposed solutions. The studied projects are also verified against a classification of important aspects regarding the scope and functionality of digital preservation in the engineering industry. Finally, future research topics are identified.
996.
We study the classical approximate string matching problem: given strings P and Q and an error threshold k, find all ending positions of substrings of Q whose edit distance to P is at most k. Let P and Q have lengths m and n, respectively. On a standard unit-cost word RAM with word size w ≥ log n, we present an algorithm using time O(nk · min(log²m / log n, log²m · log w / w) + n).
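For comparison with the word-RAM bound above, the classical O(mn) dynamic-programming baseline (Sellers' algorithm) already solves the stated problem; the sketch below is that baseline, not the paper's algorithm.

```python
def approx_match(P, Q, k):
    """Report all ending positions j in Q such that some substring of Q
    ending at j is within edit distance k of P (Sellers' O(mn) DP)."""
    m = len(P)
    # One column of the DP table; D[i] is the minimum edit distance
    # between P[:i] and some substring of Q ending at the current position.
    D = list(range(m + 1))
    ends = []
    for j, c in enumerate(Q):
        prev_diag = D[0]  # D[0] stays 0: a match may start anywhere in Q
        for i in range(1, m + 1):
            cur = min(D[i] + 1,                    # deletion
                      D[i - 1] + 1,                # insertion
                      prev_diag + (P[i - 1] != c)) # substitution / match
            prev_diag, D[i] = D[i], cur
        if D[m] <= k:
            ends.append(j)
    return ends

print(approx_match("abc", "xabcx", 1))  # [2, 3, 4]
```

Setting D[0] = 0 in every column is what turns plain edit distance into substring matching: prefixes of Q can be skipped for free.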
997.
998.
We discuss how standard Cost-Benefit Analysis should be modified in order to take risk (and uncertainty) into account. We propose different approaches used in finance (Value at Risk, Conditional Value at Risk, Downside Risk Measures, and the Efficiency Ratio) as useful tools to model the impact of risk in project evaluation. After introducing the concepts, we show how they could be used in CBA and provide some simple examples to illustrate how such concepts can be applied to evaluate the desirability of a new infrastructure project.
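As a rough illustration of two of the listed risk measures, the sketch below estimates Value at Risk and Conditional Value at Risk from a Monte Carlo sample of project NPVs; the distribution and its parameters are invented for illustration and are not taken from the paper.

```python
import random
import statistics

def var_cvar(outcomes, alpha=0.95):
    """Empirical VaR and CVaR of project outcomes, expressed as losses
    at confidence level alpha (losses = negated NPVs)."""
    losses = sorted(-x for x in outcomes)
    cut = int(alpha * len(losses))
    var = losses[cut]                 # the alpha-quantile loss
    cvar = statistics.mean(losses[cut:])  # expected loss beyond the quantile
    return var, cvar

# Hypothetical Monte Carlo draw of project NPVs: normal, mean 100, sd 50
rng = random.Random(1)
npvs = [rng.gauss(100, 50) for _ in range(10_000)]
var, cvar = var_cvar(npvs)
print(var, cvar)  # CVaR is never better (lower) than VaR
```

In a CBA setting one would compare projects not only by expected NPV but also by these tail measures, rejecting a project whose CVaR exceeds an acceptable loss threshold.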
999.
Pouria Pirzadeh Junichi Tatemura Oliver Po Hakan Hacıgümüş 《Journal of Grid Computing》2012,10(1):109-132
Recently there has been a considerable increase in the number of different key-value stores supporting data storage and applications in the cloud environment. While all these solutions try to offer highly available and scalable services in the cloud, they differ significantly from each other in their architecture and in the types of applications they aim to support. Considering three widely used such systems, Cassandra, HBase, and Voldemort, in this paper we compare them in terms of their support for different types of query workloads, focusing mainly on range queries. Unlike HBase and Cassandra, which have built-in support for range queries, Voldemort does not support this type of query through its available API. To address this, we present practical techniques on top of Voldemort to support range queries. Our performance evaluation is based on mixed query workloads, in the sense that they contain a combination of short and long range queries, besides other types of queries typical of key-value stores such as lookup and update. We show that there are trade-offs between the performance of the selected system and scheme and the types of query workloads that can be processed efficiently.
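One common way to layer range queries on top of a pure get/put API is to maintain a sorted index of the keys. The sketch below is a hypothetical in-memory illustration of that idea, not the techniques actually implemented in the paper; the RangeKV class and the key names are invented.

```python
import bisect

class RangeKV:
    """A toy range-query layer over a pure get/put key-value interface:
    a client-side sorted list of keys makes ordered scans possible."""

    def __init__(self):
        self._store = {}  # stands in for the remote key-value store
        self._index = []  # sorted list of keys, maintained on every put

    def put(self, key, value):
        if key not in self._store:
            bisect.insort(self._index, key)
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def range(self, lo, hi):
        """All (key, value) pairs with lo <= key <= hi, in key order."""
        i = bisect.bisect_left(self._index, lo)
        j = bisect.bisect_right(self._index, hi)
        return [(k, self._store[k]) for k in self._index[i:j]]

kv = RangeKV()
for k in ["user:3", "user:1", "user:7", "item:2"]:
    kv.put(k, k.upper())
print(kv.range("user:0", "user:5"))  # [('user:1', 'USER:1'), ('user:3', 'USER:3')]
```

The trade-off the abstract alludes to shows up even here: every put pays an index-maintenance cost so that range scans avoid a full key enumeration.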
1000.
Entanglement mean field theory is an approximate method for dealing with many-body systems, especially for predicting the onset of phase transitions. While previous studies have concentrated mainly on applications of the theory to short-range interaction models, we show here that it can be efficiently applied also to systems with long-range interaction Hamiltonians. We consider the (quantum) Lipkin–Meshkov–Glick spin model and derive the entanglement mean field theory reduced Hamiltonian. A similar recipe can be applied to obtain entanglement mean field theory reduced Hamiltonians corresponding to other long-range interaction systems. We show, in particular, that the zero-temperature quantum phase transition present in the Lipkin–Meshkov–Glick model can be accurately predicted by the theory.