991.
Piotr Zieliński 《Distributed Computing》2008,20(6):435-450
The Atomic Broadcast algorithm described in this paper can deliver messages in two communication steps, even if multiple processes
broadcast at the same time. It tags all broadcast messages with the local real time, and delivers all messages in the order
of these timestamps. Both positive and negative statements are used: “m broadcast at time 51” vs. “no messages broadcast between times 31 and 51”. To prevent crashed processes from blocking the
system, the elected leader broadcasts negative statements on behalf of the processes it suspects to have crashed. A new cheap Generic Broadcast algorithm is used to ensure consistency between conflicting statements. It
requires only a majority of correct processes (n > 2f) and, in failure-free runs, delivers all non-conflicting messages in two steps. The main algorithm satisfies several new
lower bounds, which are proved in this paper.
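The timestamp-ordered delivery rule can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the statement tuples, the coverage bookkeeping, and the process model are assumptions for exposition, and the sketch omits suspicion handling and the Generic Broadcast layer that resolves conflicting statements.

```python
def deliverable(statements, processes):
    """Deliver messages in timestamp order once every process is
    covered up to that time, either by its own broadcasts ("msg")
    or by negative statements ("none": nothing broadcast in (lo, hi]).
    Sketch only; see the hedging note above."""
    covered = {p: 0 for p in processes}  # time up to which p is accounted for
    msgs = []
    for s in statements:
        if s[0] == "msg":
            _, p, ts, payload = s
            covered[p] = max(covered[p], ts)
            msgs.append((ts, p, payload))
        else:  # ("none", p, lo, hi): p broadcast nothing in (lo, hi]
            _, p, lo, hi = s
            covered[p] = max(covered[p], hi)
    horizon = min(covered.values())      # all processes accounted for up to here
    return [m for m in sorted(msgs) if m[0] <= horizon]
```

With the abstract's example, a message broadcast at time 51 becomes deliverable once the other process has stated "no messages broadcast between times 31 and 51".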
992.
This paper presents an efficient scheme maintaining a separator decomposition representation in dynamic trees using asymptotically optimal labels. In order to maintain the short labels, the scheme uses relatively low
message complexity. In particular, if the initial dynamic tree contains only the root, then the scheme incurs an O(log^4 n) amortized message complexity per topology change, where n is the current number of vertices in the tree. As a separator decomposition is a fundamental decomposition of trees used
extensively as a component in many static graph algorithms, our dynamic scheme for separator decomposition may be used for
constructing dynamic versions of these algorithms. The paper then shows how to use our dynamic separator decomposition to
construct efficient labeling schemes on dynamic trees, using the same message complexity as our dynamic separator scheme.
Specifically, we construct efficient routing schemes on dynamic trees, for both the designer and the adversary port models,
which maintain optimal labels, up to a multiplicative factor of O(log log n). In addition, it is shown how to use our dynamic separator decomposition scheme to construct dynamic labeling schemes supporting
the ancestry and NCA relations using asymptotically optimal labels, as well as to extend a known result on dynamic distance
labeling schemes.
Supported in part at the Technion by an Aly Kaufman fellowship.
Supported in part by a grant from the Israel Science Foundation.
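A separator decomposition recursively splits a tree at a vertex whose removal leaves components of at most half the size. A minimal static sketch of that splitting step (the paper's contribution is maintaining the decomposition under topology changes with short labels, which this sketch does not attempt):

```python
def centroid(tree, nodes):
    """Find a separator: a vertex whose removal leaves every component
    with at most n/2 vertices. `tree` is an adjacency dict; `nodes` is
    the vertex set of the current (sub)tree."""
    n = len(nodes)
    for v in sorted(nodes):
        worst = 0
        for nb in tree[v]:
            if nb not in nodes:
                continue
            seen, stack = {v, nb}, [nb]   # flood-fill the component behind nb
            while stack:
                u = stack.pop()
                for w in tree[u]:
                    if w in nodes and w not in seen:
                        seen.add(w)
                        stack.append(w)
            worst = max(worst, len(seen) - 1)
        if worst <= n // 2:
            return v
    return None
```

Recursing on each component and labeling vertices by their path in this recursion is what yields the short labels the scheme maintains.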
993.
Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, have recently been gaining popularity. Questions
submitted to a QABB can be answered by anyone on the Internet. Communications on a QABB connect users, and the overall
connections can be regarded as a social network. If the evolution of such social networks can be predicted, this is quite useful
for encouraging communication among users; in particular, link prediction on QABB can be used to recommend questions to potential answerers.
Previous approaches for link prediction based on structural properties do not take weights of links into account. This paper
describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based
on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights
of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese
Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially
when target social networks are sufficiently dense.
Tsuyoshi Murata
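The weighted-proximity idea can be illustrated with a common-neighbors score that also sums edge weights. This is a generic sketch of the idea, not the paper's exact measure:

```python
def weighted_common_neighbors(adj, u, v):
    """Link-prediction score combining graph proximity (common
    neighbors) with the weights of existing links: each shared
    neighbor z contributes w(u,z) + w(v,z). `adj` maps each node to
    a dict of neighbor -> weight."""
    common = set(adj[u]) & set(adj[v])
    return sum(adj[u][z] + adj[v][z] for z in common)
```

Node pairs are then ranked by this score, and the top-ranked non-edges are predicted as future links.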
994.
In order to be capable of exploiting context for pro-active information recommendation, agents need to extract and understand
user activities based on their knowledge of the user interests. In this paper, we propose a novel approach for context-aware
recommendation in browsing assistants based on the integration of user profiles, navigational patterns and contextual elements.
In this approach, user profiles built using an unsupervised Web page clustering algorithm are used to characterize user ongoing
activities and behavior patterns. Experimental evidence shows that user assistance is effectively enhanced when longer-term
interests are used to explain active browsing goals.
Analía Amandi
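The profile-matching step can be sketched generically: characterize the ongoing activity by the user-profile cluster closest to the page being browsed. The cosine measure and vector representation here are illustrative assumptions; the paper's clustering algorithm and features are not specified in the abstract.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def nearest_profile(page_vec, profiles):
    """Assign the current page to the most similar user-profile
    cluster. `profiles` maps profile name -> centroid vector."""
    return max(profiles, key=lambda name: cosine(page_vec, profiles[name]))
```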
995.
Jeremy Brandman 《Journal of scientific computing》2008,37(3):282-315
We demonstrate, through separation of variables and estimates from the semi-classical analysis of the Schrödinger operator, that the eigenvalues of an elliptic operator defined on a compact hypersurface in ℝ^n can be found by solving an elliptic eigenvalue problem in a bounded domain Ω ⊂ ℝ^n. The latter problem is solved using standard finite element methods on the Cartesian grid. We also discuss the application of these ideas to solving evolution equations on surfaces, including a new proof of a result due to Greer (J. Sci. Comput. 29(3):321–351, 2006).
996.
Colin B. Macdonald Sigal Gottlieb Steven J. Ruuth 《Journal of scientific computing》2008,36(1):89-112
Diagonally split Runge–Kutta (DSRK) time discretization methods are a class of implicit time-stepping schemes which offer
both high-order convergence and a form of nonlinear stability known as unconditional contractivity. This combination is not
possible within the classes of Runge–Kutta or linear multistep methods and therefore appears promising for the strong stability
preserving (SSP) time-stepping community which is generally concerned with computing oscillation-free numerical solutions
of PDEs. Using a variety of numerical test problems, we show that although second- and third-order unconditionally contractive
DSRK methods do preserve the strong stability property for all time step-sizes, they suffer from order reduction at large
step-sizes. Indeed, for time-steps larger than those typically chosen for explicit methods, these DSRK methods behave like
first-order implicit methods. This is unfortunate, because it is precisely to allow a large time-step that we choose to use
implicit methods. These results suggest that unconditionally contractive DSRK methods are limited in usefulness as they are
unable to compete with either the first-order backward Euler method for large step-sizes or with Crank–Nicolson or high-order
explicit SSP Runge–Kutta methods for smaller step-sizes.
We also present stage order conditions for DSRK methods and show that the observed order reduction is associated with the
necessarily low stage order of the unconditionally contractive DSRK methods.
The work of C.B. Macdonald was partially supported by an NSERC Canada PGS-D scholarship, a grant from NSERC Canada, and a
scholarship from the Pacific Institute for the Mathematical Sciences (PIMS).
The work of S. Gottlieb was supported by AFOSR grant number FA9550-06-1-0255.
The work of S.J. Ruuth was partially supported by a grant from NSERC Canada.
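The order-reduction finding above can be diagnosed numerically: if halving the step size reduces the error by a factor of 2^p, the observed order is p, and order reduction shows up as this estimate dropping toward 1 at large steps. A minimal sketch of the estimator (illustrative, not the paper's test suite):

```python
import math

def observed_order(err_h, err_half):
    """Observed convergence order from errors at step sizes h and h/2:
    p ~= log2(err_h / err_half)."""
    return math.log(err_h / err_half, 2)
```

A second-order method gives errors shrinking by a factor of 4 per halving, so the estimate is 2; a method reduced to first order shrinks errors by only a factor of 2.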
997.
Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection
method for the Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features
in decreasing order of importance so that more relevant features can be identified. It uses a novel criterion based on the
probabilistic outputs of SVM. This criterion, termed Feature-based Sensitivity of Posterior Probabilities (FSPP), evaluates
the importance of a specific feature by computing the aggregate value, over the feature space, of the absolute difference
of the probabilistic outputs of SVM with and without the feature. The exact form of this criterion is not easily computable
and approximation is needed. Four approximations, FSPP1-FSPP4, are proposed for this purpose. The first two approximations
evaluate the criterion by randomly permuting the values of the feature among samples of the training data. They differ in
their choices of the mapping function from standard SVM output to its probabilistic output: FSPP1 uses a simple threshold
function while FSPP2 uses a sigmoid function. The second two directly approximate the criterion but differ in the smoothness
assumptions of the criterion with respect to the features. The performance of these approximations, used in an overall feature-selection
scheme, is then evaluated on various artificial problems and real-world problems, including datasets from the recent Neural
Information Processing Systems (NIPS) feature selection competition. FSPP1–FSPP3 consistently show good performance, with
FSPP2 being the best overall by a slight margin. The performance of FSPP2 is competitive with some of the best performing feature-selection
methods in the literature on the datasets that we have tested. Its associated computations are modest and hence it is suitable
as a feature-selection method for SVM applications.
Editor: Risto Miikkulainen.
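The permutation-based approximations (FSPP1/FSPP2) can be sketched as follows: permute one feature's values across training samples and average the absolute change in the model's probabilistic output. The function names and interface are assumptions for illustration; `prob` stands in for the SVM-to-probability mapping, whose exact form (threshold vs. sigmoid) is what distinguishes FSPP1 from FSPP2.

```python
import random

def fspp_permutation_score(prob, X, feature, n_repeats=5, seed=0):
    """FSPP-style importance sketch: average |P_with - P_permuted| when
    the given feature's values are shuffled among the samples.
    `prob` maps a sample (list of floats) to a probability in [0, 1]."""
    rng = random.Random(seed)
    base = [prob(x) for x in X]
    total, count = 0.0, 0
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)                 # break the feature-label link
        for i, x in enumerate(X):
            x2 = list(x)
            x2[feature] = col[i]
            total += abs(prob(x2) - base[i])
            count += 1
    return total / count
```

An irrelevant (e.g. constant) feature scores 0, since permuting it cannot change the output; features are then ranked in decreasing order of this score.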
998.
Dongsoo Kang Chen Liu Jean-Luc Gaudiot 《International journal of parallel programming》2008,36(4):361-385
By executing two or more threads concurrently, Simultaneous MultiThreading (SMT) architectures are able to exploit both Instruction-Level
Parallelism (ILP) and Thread-Level Parallelism (TLP) from the increased number of in-flight instructions that are fetched
from multiple threads. However, due to incorrect control speculations, a significant number of these in-flight instructions
are discarded from the pipelines of SMT processors (which is a direct consequence of these pipelines getting wider and deeper).
Although increasing the accuracy of branch predictors may reduce the number of instructions so discarded from the pipelines,
the prediction accuracy cannot be easily scaled up, since aggressive branch prediction schemes strongly depend on the
predictability inherent in the application programs. In this paper, we present an efficient thread scheduling mechanism
for SMT processors, called SAFE-T (Speculation-Aware Front-End Throttling): it is easy to implement and allows an SMT processor
to selectively perform speculative execution of threads according to the confidence level on branch predictions, hence preventing
wrong-path instructions from being fetched. SAFE-T provides an average reduction of 57.9% in the number of discarded instructions
and improves the instructions per cycle (IPC) performance by 14.7% on average over the ICOUNT policy across the multi-programmed
workloads we simulate.
This paper is an extended version of the paper, “Speculation Control for Simultaneous Multithreading,” which appeared in the
Proceedings of the 18th International Parallel and Distributed Processing Symposium, Santa Fe, New Mexico, April 2004.
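A confidence-gated fetch policy of the kind described above can be sketched in a few lines. The record fields, threshold, and ICOUNT-like tie-break are illustrative assumptions, not the paper's exact mechanism:

```python
def select_fetch_threads(threads, conf_threshold=0.6, width=2):
    """SAFE-T-style fetch gating sketch: fetch only from threads whose
    current branch-prediction confidence meets a threshold, then favor
    the threads with the fewest in-flight instructions (ICOUNT-like).
    Each thread is a dict with "id", "conf", and "in_flight" fields."""
    eligible = [t for t in threads if t["conf"] >= conf_threshold]
    eligible.sort(key=lambda t: t["in_flight"])
    return [t["id"] for t in eligible[:width]]
```

Low-confidence threads are simply not fetched from this cycle, which is how wrong-path instructions are kept out of the pipeline.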
999.
Boosted Bayesian network classifiers
The use of Bayesian networks for classification problems has received a significant amount of recent attention. Although computationally
efficient, the standard maximum likelihood learning method tends to be suboptimal due to the mismatch between its optimization
criteria (data likelihood) and the actual goal of classification (label prediction accuracy). Recent approaches to optimizing
classification performance during parameter or structure learning show promise, but lack the favorable computational properties
of maximum likelihood learning. In this paper we present boosted Bayesian network classifiers, a framework to combine discriminative
data-weighting with generative training of intermediate models. We show that boosted Bayesian network classifiers encompass
the basic generative models in isolation, but improve their classification performance when the model structure is suboptimal.
We also demonstrate that structure learning is beneficial in the construction of boosted Bayesian network classifiers. On
a large suite of benchmark data-sets, this approach outperforms generative graphical models such as naive Bayes and TAN in
classification accuracy. Boosted Bayesian network classifiers have comparable or better performance in comparison to other
discriminatively trained graphical models including ELR and BNC. Furthermore, boosted Bayesian networks require significantly
less training time than the ELR and BNC algorithms. 相似文献
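The discriminative data-weighting half of the framework is the standard boosting reweighting step, which can be sketched independently of the generative learner it wraps. This is textbook AdaBoost reweighting, shown for illustration; the paper's contribution is combining it with generative Bayesian-network training:

```python
import math

def boost_weights(weights, errors, eps):
    """One AdaBoost-style reweighting step: given per-example weights,
    a boolean misclassification flag per example, and the weighted
    error eps of the current (generatively trained) model, upweight
    the misclassified examples and renormalize."""
    alpha = 0.5 * math.log((1 - eps) / eps)   # model's vote weight
    new = [w * math.exp(alpha if err else -alpha)
           for w, err in zip(weights, errors)]
    z = sum(new)
    return [w / z for w in new], alpha
```

After the step, the misclassified examples carry half the total weight, forcing the next Bayesian network in the ensemble to focus on them.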
1000.
Inductive transfer with context-sensitive neural networks
Context-sensitive Multiple Task Learning, or csMTL, is presented as a method of inductive transfer which uses a single output neural network and additional contextual inputs
for learning multiple tasks. Motivated by problems with the application of MTL networks to machine lifelong learning systems,
csMTL encoding of multiple task examples was developed and found to improve predictive performance. As evidence, the csMTL method is tested on seven task domains and shown to produce hypotheses for primary tasks that are often better than standard
MTL hypotheses when learning in the presence of related and unrelated tasks. We argue that the reason for this performance
improvement is a reduction in the number of effective free parameters in the csMTL network brought about by the shared output node and weight update constraints due to the context inputs. An examination
of IDT and SVM models developed from csMTL encoded data provides initial evidence that this improvement is not shared across all machine learning models.
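The csMTL input encoding itself is simple to sketch: a single shared network with one output receives the primary features plus one-hot context inputs identifying the task, so all tasks share the same weights. The helper below is an illustrative encoding step, not the paper's code:

```python
def csmtl_encode(x, task, n_tasks):
    """csMTL-style example encoding sketch: append a one-hot task
    identifier to the primary feature vector. A single-output network
    trained on such examples learns all n_tasks tasks with one set of
    shared weights, the context inputs selecting the task."""
    context = [1.0 if t == task else 0.0 for t in range(n_tasks)]
    return list(x) + context
```

Because the task identity is just another input, the number of effective free parameters does not grow with a separate output per task, which is the reduction the abstract credits for the performance improvement.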