Full-text access type
Paid full text | 1405 articles |
Free | 61 articles |
Free (domestic) | 1 article |
Subject classification
Electrical engineering | 20 articles |
General | 5 articles |
Chemical industry | 192 articles |
Metalworking | 35 articles |
Machinery and instruments | 40 articles |
Building science | 123 articles |
Mining engineering | 4 articles |
Energy and power | 55 articles |
Light industry | 119 articles |
Water conservancy engineering | 14 articles |
Petroleum and natural gas | 4 articles |
Radio and electronics | 179 articles |
General industrial technology | 195 articles |
Metallurgy | 90 articles |
Nuclear technology | 7 articles |
Automation technology | 385 articles |
Publication year
2024 | 3 articles |
2023 | 14 articles |
2022 | 16 articles |
2021 | 30 articles |
2020 | 26 articles |
2019 | 17 articles |
2018 | 41 articles |
2017 | 32 articles |
2016 | 50 articles |
2015 | 43 articles |
2014 | 45 articles |
2013 | 114 articles |
2012 | 87 articles |
2011 | 81 articles |
2010 | 86 articles |
2009 | 71 articles |
2008 | 67 articles |
2007 | 92 articles |
2006 | 75 articles |
2005 | 50 articles |
2004 | 49 articles |
2003 | 63 articles |
2002 | 30 articles |
2001 | 23 articles |
2000 | 30 articles |
1999 | 23 articles |
1998 | 20 articles |
1997 | 12 articles |
1996 | 22 articles |
1995 | 11 articles |
1994 | 11 articles |
1993 | 13 articles |
1992 | 19 articles |
1991 | 15 articles |
1990 | 10 articles |
1989 | 8 articles |
1988 | 6 articles |
1987 | 4 articles |
1985 | 9 articles |
1984 | 7 articles |
1983 | 4 articles |
1982 | 4 articles |
1981 | 3 articles |
1980 | 5 articles |
1979 | 4 articles |
1977 | 3 articles |
1976 | 3 articles |
1975 | 4 articles |
1966 | 2 articles |
1965 | 2 articles |
Sort by: 1467 results found, search time 15 ms
71.
72.
Tony Salvador, John W. Sherry, Alvaro E. Urrutia 《Information Technology for Development》2013,19(1):77-95
About 10% of the world has access to information and communication technologies (ICTs). Telecenters and cyber cafés are one prevalent way to increase access. This paper suggests increasing access through currently existing local businesses where people already gather and where proprietors already possess existing business relationships with suppliers and customers. This paper questions the prevailing emphasis on the “cyber” characteristics of access, e.g., computing and internet access as currently known, and attempts to refocus the conversation by considering computing and access in the context of the “café,” e.g., as public life in the sense of Habermas, which permits an in situ evolution of relevant access. This analysis is based on extant literature and direct ethnographic research in several public places in six countries. We offer example design perspectives based on a reflection of “third places” as inspiration for appropriate innovation in the provision of computing and communications. © 2005 Wiley Periodicals, Inc.
73.
With the popularity of parallel database machines based on the shared-nothing architecture, it has become important to find external sorting algorithms that lead to a load-balanced computation, i.e., balanced execution, communication and output. If during the course of the sorting algorithm each processor is equally loaded, parallelism is fully exploited. Similarly, balanced communication will not congest the network traffic. Since sorting can be used to support a number of other relational operations (joins, duplicate elimination, building indexes, etc.), data skew produced by sorting can further lead to execution skew at later stages of these operations. In this paper we present a load-balanced parallel sorting algorithm for shared-nothing architectures. It is a multiple-input multiple-output algorithm with four stages, based on a generalization of Batcher's odd-even merge. At each stage the n keys are evenly distributed among the p processors (i.e., there is no final sequential merge phase) and the distribution of keys between stages ensures against network congestion. There is no assumption made on the key distribution and the algorithm performs equally well in the presence of duplicate keys. Hence our approach always guarantees its performance, as long as n is greater than p³, which is the case of interest for sorting large relations. In addition, processors can be added incrementally.
Recommended by: Patrick Valduriez
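Batcher's odd-even merge, which the algorithm above generalizes, is easiest to see in its sequential form. A minimal Python sketch (our own illustration, not the paper's parallel algorithm; array length is assumed to be a power of two and function names are ours):

```python
def odd_even_merge(a):
    """Merge an array whose two sorted halves are concatenated in a."""
    n = len(a)
    if n == 2:
        return [min(a), max(a)]
    # Recursively merge the even- and odd-indexed subsequences
    # (each of which again consists of two sorted halves), then
    # compare-exchange adjacent odd/even pairs.
    even = odd_even_merge(a[0::2])
    odd = odd_even_merge(a[1::2])
    out = [None] * n
    out[0::2] = even
    out[1::2] = odd
    for i in range(1, n - 1, 2):
        if out[i] > out[i + 1]:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def odd_even_merge_sort(a):
    """Batcher's odd-even mergesort; len(a) must be a power of two."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    return odd_even_merge(odd_even_merge_sort(a[:half]) +
                          odd_even_merge_sort(a[half:]))
```

In the parallel setting, each compare-exchange column becomes a fixed communication pattern between processors, which is what makes the key distribution between stages predictable.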
74.
75.
Efficient algorithms for optimistic crash recovery (cited by: 1; self-citations: 0; citations by others: 1)
Summary: Recovery from transient processor failures can be achieved by using optimistic message logging and checkpointing. The faulty processors roll back, and some or all of the non-faulty processors may also have to roll back. This paper formulates the rollback problem as a closure problem. A centralized closure algorithm is presented, together with two efficient distributed implementations. Several related problems are also considered and distributed algorithms are presented for solving them.
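The closure idea can be illustrated with a deliberately simplified sketch: treat message dependencies as a directed graph, and the set of processors forced to roll back is the closure of the faulty set under "received a message that is now orphaned." (This process-level granularity is our simplification; the paper tracks dependencies at the level of process states.)

```python
def rollback_closure(deps, faulty):
    """Processors that must roll back, as a closure over message
    dependencies.  deps[p] is the set of processors that received a
    message from p after p's last recoverable state, so if p rolls
    back those messages become orphans and the receivers must roll
    back too."""
    must_roll = set(faulty)
    frontier = list(faulty)
    while frontier:
        p = frontier.pop()
        for q in deps.get(p, ()):
            if q not in must_roll:
                must_roll.add(q)
                frontier.append(q)
    return must_roll
```

For example, with deps = {1: {2}, 2: {3}, 4: {1}} a failure of processor 1 forces 2 and then 3 to roll back, while 4 is unaffected.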
S. Venkatesan received the B. Tech. and M. Tech degrees from the Indian Institute of Technology, Madras in 1981 and 1983, respectively and the M.S. and Ph.D. degrees in Computer Science from the University of Pittsburgh in 1985 and 1988. He joined the University of Texas at Dallas in January 1989, where he is currently an Assistant Professor of Computer Science. His research interests are in fault-tolerant distributed systems, distributed algorithms, testing and debugging distributed programs, fault-tolerant telecommunication networks, and mobile computing.
Tony-Ying Juang is an Associate Professor of Computer Science at the Chung-Hwa Polytechnic Institute. He received the B.S. degree in Naval Architecture from the National Taiwan University in 1983 and his M.S. and Ph.D. degrees in Computer Science from the University of Texas at Dallas in 1989 and 1992, respectively. His research interests include distributed algorithms, fault-tolerant distributed computing, distributed operating systems and computer communications. This research was supported in part by NSF under Grant No. CCR-9110177 and by the Texas Advanced Technology Program under Grant No. 9741-036.
76.
When solving an optimization problem with a Hopfield network, a solution is obtained after the network is relaxed to an equilibrium state. The relaxation process is an important step in achieving a solution. In this paper, a new procedure for the relaxation process is proposed. In the new procedure, the amplified signal a neuron receives from other neurons is treated as the target for its activation (output) value. The activation of a neuron is updated directly, based on the difference between its current activation and the received target, without using the updating of the input value as an intermediate step. A relaxation rate is applied to control the updating scale for a smooth relaxation process. The new procedure is evaluated and compared with the original procedure in the Hopfield network through simulations based on 200 randomly generated instances of the 10-city traveling salesman problem. The new procedure reduces the error rate by 34.6% and increases the percentage of valid tours by 194.6% as compared with the original procedure.
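A minimal sketch of the described update rule, assuming a sigmoid activation function (the weight matrix, relaxation rate and step count below are illustrative choices of ours, not the paper's TSP setup):

```python
import numpy as np

def relax(W, b, a, rate=0.1, steps=200):
    """Direct-activation relaxation: the squashed net input a neuron
    receives is treated as a target for its activation, and each
    activation moves a fraction `rate` of the way toward that target
    per step, with no separate input-value update."""
    for _ in range(steps):
        target = 1.0 / (1.0 + np.exp(-(W @ a + b)))  # sigmoid of net input
        a = a + rate * (target - a)                   # move toward target
    return a
```

With a small relaxation rate the trajectory stays smooth, which is the point of the procedure; rate = 1 would jump straight to the target each step.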
77.
Benchmarking Least Squares Support Vector Machine Classifiers (cited by: 16; self-citations: 0; citations by others: 16)
van Gestel Tony, Suykens Johan A.K., Baesens Bart, Viaene Stijn, Vanthienen Jan, Dedene Guido, de Moor Bart, Vandewalle Joos 《Machine Learning》2004,54(1):5-32
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
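The "linear set of equations in the dual space" can be sketched concretely. In the ridge-regression-on-targets form of the LS-SVM classifier, training amounts to solving one (n+1)-dimensional linear system over the kernel matrix K (variable names and the toy usage below are ours):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM dual system
        [ 0    1^T         ] [ b     ]   [ 0 ]
        [ 1    K + I/gamma ] [ alpha ] = [ y ]
    for the bias b and support values alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # ridge term from the 2-norm cost
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_predict(X, Xtr, alpha, b, sigma=1.0):
    return np.sign(rbf_kernel(X, Xtr, sigma) @ alpha + b)
```

Because every equation contributes a (generally nonzero) alpha, the solution is dense, which is exactly the lost-sparseness property the abstract mentions; the pruning stage would zero out small entries of alpha and re-solve.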
78.
One of the main goals of an applied research field such as software engineering is the transfer and widespread use of research results in industry. To impact industry, researchers developing technologies in academia need to provide tangible evidence of the advantages of using them. This can be done through step-wise validation, enabling researchers to gradually test and evaluate technologies to finally try them in real settings with real users and applications. The evidence obtained, together with detailed information on how the validation was conducted, offers rich decision-support material for industry practitioners seeking to adopt new technologies and researchers looking for an empirical basis on which to build new or refined technologies. This paper presents a model for evaluating the rigor and industrial relevance of technology evaluations in software engineering. The model is applied and validated in a comprehensive systematic literature review of evaluations of requirements engineering technologies published in software engineering journals. The aim is to show the applicability of the model and to characterize how evaluations are carried out and reported to evaluate the state of research. The review shows that the model can be applied to characterize evaluations in requirements engineering. The findings from applying the model also show that the majority of technology evaluations in requirements engineering lack both industrial relevance and rigor. In addition, the research field does not show any improvement in terms of industrial relevance over time.
79.
Tony Gill, Kasper Johansen, Stuart Phinn, Rebecca Trevithick, Peter Scarth, John Armston 《International journal of remote sensing》2017,38(3):679-705
There is a significant need to provide nationwide consistent information for land managers and scientists to assist with property planning, vegetation monitoring applications, risk assessment, and conservation activities at an appropriate spatial scale. We created maps of woody vegetation cover of Australia using a consistent method applied across the continent, and made them accessible. We classified pixels as woody or not woody, quantified their foliage projective cover, and classed them as forest or other wooded lands based on their cover density. The maps provide, for the first time, cover density estimates of Australian forests and other wooded lands with the spatial detail required for local-scale studies. The maps were created by linking field data, collected by a network of collaborators across the continent, to a time series of Landsat-5 TM and Landsat-7 ETM+ images for the period 2000–2010. The fractions of green vegetation cover, non-green vegetation cover, and bare ground were calculated for each pixel using a previously developed spectral unmixing approach. Time-series statistics for the green vegetation cover were used to classify each pixel as either woody or not using a random forest classifier. An estimate of woody foliage projective cover was made by calibration with field measurements, and woody pixels were classified as forest where the foliage cover was at least 0.1. Validation of the foliage projective cover with field measurements gave a coefficient of determination, R², of 0.918 and a root mean square error of 0.070. The user's and producer's accuracies for areas mapped as forest were high, at 92.2% and 95.9%, respectively. The user's and producer's accuracies were lower for other wooded lands, at 75.7% and 61.3%, respectively. Further research into methods to better separate areas with sparse woody vegetation from those without woody vegetation is needed.

The maps provide information that will assist in gaining a better understanding of our natural environment. Applications range from the continental-scale activity of estimating national carbon stocks to the local-scale activities of assessing habitat suitability and property planning.
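The reported user's and producer's accuracies follow directly from the error (confusion) matrix of the map validation. A small helper, assuming the usual remote-sensing convention of rows = mapped class, columns = reference class (names ours):

```python
import numpy as np

def users_producers_accuracy(cm):
    """Per-class accuracies from a confusion matrix.
    User's accuracy  = correct / row total    (commission errors);
    producer's accuracy = correct / column total (omission errors)."""
    cm = np.asarray(cm, dtype=float)
    correct = np.diag(cm)
    users = correct / cm.sum(axis=1)
    producers = correct / cm.sum(axis=0)
    return users, producers
```

User's accuracy answers "if the map says forest, how often is it really forest?", while producer's accuracy answers "of the real forest, how much did the map capture?", which is why the two can differ as sharply as they do for the other-wooded-lands class above.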
80.
Muhammad Yasir, Martin Purvis, Maryam Purvis, Bastin Tony Roy Savarimuthu 《Computational Intelligence》2018,34(2):679-712
In recent years, the notion of electrical energy microgrids (MGs), in which communities share their locally generated power, has gained increasing interest. Typically, the energy generated comes from renewable resources, which means that its availability is variable, i.e., sometimes there may be energy surpluses and at other times energy deficits. This energy variability can be ameliorated by trading energy with a connected electricity grid. However, since main electricity grids are subject to faults or other outages, it can be advantageous for energy MGs to form coalitions and share their energy among themselves. In this work, we present our model for the dynamic formation of such MG coalitions. In our model, MGs form coalitions on the basis of complementary weather patterns. Our agent-based model, which is scalable and affords autonomy among the MGs participating in the coalition (agents can join and depart from coalitions at any time), features methods to reduce overall “discomfort” so that, even when all participating MGs in a coalition experience deficits, they can share energy so that their overall discomfort is reduced. We demonstrate the efficacy of our model by showing empirical studies conducted with real energy production and consumption data.
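A toy sketch of one round of intra-coalition sharing (illustrative only; the paper's agent-based protocol and its discomfort measure are richer than this greedy matching, and the function and variable names are ours):

```python
def share_energy(balances):
    """Greedily route surplus energy to deficit microgrids, serving the
    largest deficit first.  balances maps MG name -> net energy
    (positive = surplus, negative = deficit); returns a list of
    (from_mg, to_mg, amount) transfers."""
    surplus = {m: b for m, b in balances.items() if b > 0}
    deficit = {m: -b for m, b in balances.items() if b < 0}
    transfers = []
    for d, need in sorted(deficit.items(), key=lambda kv: -kv[1]):
        for s in list(surplus):
            if need <= 0:
                break
            give = min(surplus[s], need)
            if give > 0:
                transfers.append((s, d, give))
                surplus[s] -= give
                need -= give
    return transfers
```

For instance, with balances {"a": 5, "b": -3, "c": -4} the round sends 4 units from a to c and 1 unit from a to b, leaving b with a reduced deficit rather than leaving one MG fully unserved.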