Full-text access type
Paid full text | 7019 articles |
Free | 198 articles |
Subject classification
Electrical engineering | 97 articles |
General | 2 articles |
Chemical industry | 979 articles |
Metalworking | 100 articles |
Machinery and instrumentation | 106 articles |
Architecture and construction | 235 articles |
Mining engineering | 22 articles |
Energy and power engineering | 138 articles |
Light industry | 582 articles |
Hydraulic engineering | 77 articles |
Petroleum and natural gas | 17 articles |
Radio and electronics | 733 articles |
General industrial technology | 1018 articles |
Metallurgical industry | 2145 articles |
Atomic energy technology | 44 articles |
Automation technology | 922 articles |
Publication year
2022 | 50 articles |
2021 | 94 articles |
2020 | 68 articles |
2019 | 117 articles |
2018 | 125 articles |
2017 | 83 articles |
2016 | 101 articles |
2015 | 85 articles |
2014 | 144 articles |
2013 | 315 articles |
2012 | 221 articles |
2011 | 277 articles |
2010 | 207 articles |
2009 | 193 articles |
2008 | 271 articles |
2007 | 249 articles |
2006 | 236 articles |
2005 | 193 articles |
2004 | 160 articles |
2003 | 149 articles |
2002 | 151 articles |
2001 | 143 articles |
2000 | 113 articles |
1999 | 148 articles |
1998 | 501 articles |
1997 | 330 articles |
1996 | 219 articles |
1995 | 195 articles |
1994 | 182 articles |
1993 | 179 articles |
1992 | 99 articles |
1991 | 76 articles |
1990 | 100 articles |
1989 | 100 articles |
1988 | 76 articles |
1987 | 79 articles |
1986 | 69 articles |
1985 | 87 articles |
1984 | 80 articles |
1983 | 66 articles |
1982 | 54 articles |
1981 | 50 articles |
1980 | 54 articles |
1979 | 61 articles |
1978 | 56 articles |
1977 | 77 articles |
1976 | 158 articles |
1975 | 51 articles |
1974 | 38 articles |
1973 | 38 articles |
Sort by: 7217 results in total, search took 31 ms
131.
An extension to the divide-and-conquer algorithm (DCA) is presented in this paper to model constrained multibody systems. The constraints of interest are those applied to the system due to inverse dynamics or control laws rather than the kinematically closed loops which have been studied in the literature. These imposed constraints are often expressed in terms of the generalized coordinates and speeds. A set of unknown generalized constraint forces must be considered in the equations of motion to enforce these algebraic constraints. In this paper, the dynamics of this class of constrained multibody systems is formulated using a Generalized-DCA (GDCA). In this scheme, by introducing dynamically equivalent forcing systems, each generalized constraint force is replaced by a dynamically equivalent spatial constraint force applied from the appropriate parent body to the associated child body at the connecting joint, without violating the dynamics of the original system. The handle equations of motion are then formulated in terms of these dynamically equivalent spatial constraint forces. In the GDCA scheme, these equations are used in the assembly and disassembly processes to solve for the states of the system, as well as the generalized constraint forces and/or Lagrange multipliers.
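The role of the unknown generalized constraint forces above can be illustrated with a standard Lagrange-multiplier formulation (this is the classical dense solve, not the GDCA assembly/disassembly recursion): the equations of motion and the algebraic constraint are solved together as one augmented linear system, yielding both the accelerations and the multiplier that enforces the constraint. A minimal sketch with an invented two-mass system and a made-up constraint q1 = q2:

```python
import numpy as np

# Invented example: two unit masses with applied forces, subject to the
# algebraic constraint q1 = q2. Differentiating twice with a constant
# Jacobian J = [1, -1] gives J qdd = 0 at the acceleration level.
M = np.diag([1.0, 1.0])        # mass matrix
F = np.array([3.0, 1.0])       # applied generalized forces
J = np.array([[1.0, -1.0]])    # constraint Jacobian

# Augmented system  [M  J^T] [ qdd]   [F]
#                   [J   0 ] [-lam] = [0]
# i.e. M qdd = F + J^T lam together with J qdd = 0.
A = np.block([[M, J.T], [J, np.zeros((1, 1))]])
b = np.concatenate([F, np.zeros(1)])
sol = np.linalg.solve(A, b)
qdd, lam = sol[:2], -sol[2]    # accelerations and the constraint multiplier
```

Each time step of a constrained multibody simulation involves a solve of this kind; the GDCA's contribution, per the abstract, is to obtain the same quantities recursively over the body tree rather than from one dense system.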
132.
Data interoperability between computer-aided design (CAD) systems remains a major obstacle to information integration and exchange in a collaborative engineering environment. The use of CAD data exchange standards causes the loss of design intent such as construction history, features, parameters, and constraints, whereas existing research on feature-based data exchange focuses only on class-level feature definitions and does not support instance-level verification, which causes ambiguity in data exchange. In this paper, a hybrid ontology approach is proposed to allow for the full exchange of both feature definition semantics and geometric construction data. A shared base ontology is used to convey the most fundamental elements of CAD systems for geometry and topology, so as to both maximize flexibility and minimize information loss. A three-branch hybrid CAD feature model that includes feature operation information at the boundary-representation level is constructed. Instance-level feature information in the form of the base ontology is then translated to the local ontologies of individual CAD systems during a rule-based mapping and verification process. A combination of the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL) is used to represent feature classes and properties and to automatically classify them with a reasoner in the target system, which requires no knowledge about the source system.
133.
Anderson da Silva Soares, Telma Woerle de Lima, Daniel Vitor de Lucena, Rogerio Lopes Salvini, Gustavo Teodoro Laureano, Clarimar Jose Coelho 《Computer Technology and Application》2013,(9):466-475
The multiple determination of chemical properties is a classical problem in analytical chemistry. The central difficulty is to find the subset of variables that best represents the compounds. These variables are obtained with a spectrophotometer, a device that measures hundreds of correlated variables related to physicochemical properties, which can be used to estimate the component of interest. The problem is thus the selection of a subset of informative and uncorrelated variables that helps minimize the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (strength Pareto evolutionary algorithm II) and show that this variable selection algorithm can select a single subset to be used for multiple determinations with multiple linear regressions. The case study uses wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is to determine a variable subgroup carrying information about protein content (%), test weight (kg/hL), WKT (wheat kernel texture) (%), and farinograph water absorption (%). Results of traditional multivariate calibration techniques, namely SPA (successive projections algorithm), PLS (partial least squares), and a mono-objective genetic algorithm, are presented for comparison. For NIR spectral analysis of protein concentration in wheat, the number of selected variables was reduced from 775 spectral variables to just 10 by the SPEA-II algorithm, and the prediction error decreased from 0.2 with the classical methods to 0.09 with the proposed approach. The model using variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
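The core evaluation inside any such variable-selection loop, whether SPEA-II or a classical search, is cheap: pick a column subset, fit a multiple linear regression, and score the prediction error on held-out samples. A toy sketch with synthetic data (the matrix sizes, the informative column indices, and the noise level are invented for illustration and are not the paper's wheat/NIR data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 50 variables, independent here for
# simplicity (real NIR variables are highly correlated). The property of
# interest depends only on columns 3 and 17.
X = rng.normal(size=(60, 50))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + rng.normal(scale=0.05, size=60)

def rmsep(subset, X, y, n_train=40):
    """Fit an MLR on the first n_train samples using only `subset` columns,
    then return the root-mean-square error of prediction on the rest."""
    Xs = X[:, list(subset)]
    coef, *_ = np.linalg.lstsq(Xs[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - Xs[n_train:] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

err_good = rmsep([3, 17], X, y)          # the informative subset
err_bad = rmsep([0, 1, 2, 4, 5], X, y)   # uninformative columns only
```

A multi-objective optimizer like SPEA-II would evaluate a fitness of this kind once per property (protein, test weight, etc.) for each candidate subset, seeking one subset that scores well on all of them.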
134.
Seasonal temperature and bioenergetic models were coupled to explore the impacts on juvenile salmonid growth of possible climate‐induced changes to mean annual water temperature and snowpack in four characteristic ecoregions. Increasing mean temperature increases juvenile growth in streams that currently experience cool spring temperatures. In streams with currently warm spring temperatures, an increase shortens the duration of optimal conditions and truncates growth. A loss of snow enhances growth in cool‐summer streams and decreases growth in warm‐summer streams. The relative impacts of such climate change trends will vary significantly across ecoregions. Copyright © 2010 John Wiley & Sons, Ltd.
135.
We present an unbiased method for generating caustic lighting using importance sampled Path Tracing with Caustic Forecasting. Our technique is part of a straightforward rendering scheme which extends the Illumination by Weak Singularities method to allow for fully unbiased global illumination with rapid convergence. A photon shooting preprocess, similar to that used in Photon Mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions which will contribute similar amounts of caustic lighting to the image. Finally, the photons are stored into spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering we use clusters to decide the caustic energy importance of a region, and use the local photons to aid in importance sampling, effectively reducing the number of samples required to capture caustic lighting.
136.
Zhen He, X. Sean Wang, Byung Suk Lee, Alan C. H. Ling 《Knowledge and Information Systems》2008,15(1):31-54
Recently, periodic pattern mining from time series data has been studied extensively. However, an interesting type of periodic pattern, called partial periodic (PP) correlation in this paper, has not been investigated. An example of PP correlation is that power consumption is high either on Monday or Tuesday but not on both days. In general, a PP correlation is a set of offsets within a particular period such that the data at these offsets are correlated with a certain user-desired strength. In the above example, the period is a week (7 days), and each day of the week is an offset of the period. PP correlations can provide insightful knowledge about the time series and can be used for predicting future values. This paper introduces an algorithm to mine time series for PP correlations based on the principal component analysis (PCA) method. Specifically, given a period, the algorithm maps the time series data to data points in a multidimensional space, where the dimensions correspond to the offsets within the period. A PP correlation is then equivalent to correlation of the data when projected onto a subset of the dimensions. The algorithm discovers, with one sequential scan of the data, all those PP correlations (called minimum PP correlations) that are not unions of other PP correlations. Experiments using both real and synthetic data sets show that the PCA-based algorithm is highly efficient and effective in finding the minimum PP correlations.
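The offset-space mapping the abstract describes is simple to reproduce: fold the series into one row per period, so each offset becomes a dimension, and inspect correlations between offsets. The sketch below builds a synthetic "Monday-or-Tuesday" series matching the paper's example; only the mapping and the pairwise correlation it exposes are shown (the paper's PCA step and scan algorithm are not reproduced), and all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
period, weeks = 7, 200   # weekly period, one offset per day

# Toy series matching the paper's example: each week, consumption is high on
# Monday (offset 0) or Tuesday (offset 1) but never both; other offsets are
# independent noise.
folded = rng.normal(size=(weeks, period))
monday_high = rng.random(weeks) < 0.5
folded[monday_high, 0] += 8.0
folded[~monday_high, 1] += 8.0
series = folded.ravel()            # the flat 1-D time series as observed

# The algorithm's mapping: one data point per period, one dimension per offset.
points = series.reshape(-1, period)

# Pairwise correlation between offsets: offsets 0 and 1 come out strongly
# negatively correlated (the PP correlation); unrelated offsets do not.
corr = np.corrcoef(points, rowvar=False)
```

In the paper, PCA over these folded points is what detects such correlated offset subsets; here the 7x7 correlation matrix already makes the Monday/Tuesday dependence visible.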
Zhen He is a lecturer in the Department of Computer Science at La Trobe University. His main research areas are database systems optimization, time series mining, wireless sensor networks, and XML information retrieval. Prior to joining La Trobe University, he worked as a postdoctoral research associate at the University of Vermont. He holds Bachelor's, Honors, and Ph.D. degrees in Computer Science from the Australian National University.
X. Sean Wang received his Ph.D. degree in Computer Science from the University of Southern California in 1992. He is currently the Dorothean Chair Professor in Computer Science at the University of Vermont. He has published widely in the general area of databases and information security, and was a recipient of the US National Science Foundation Research Initiation and CAREER awards. His research interests include database systems, information security, data mining, and sensor data processing.
Byung Suk Lee is an associate professor of Computer Science at the University of Vermont. His main research areas are database systems, data modeling, and information retrieval. He has held positions in industry and academia: Gold Star Electric, Bell Communications Research, Datacom Global Communications, the University of St. Thomas, and currently the University of Vermont. He was also a visiting professor at Dartmouth College and a participating guest at Lawrence Livermore National Laboratory. He has served on international conferences as a program committee member, a publicity chair, and a special session organizer, and also on a US federal funding proposal review panel. He holds a BS degree from Seoul National University, an MS from the Korea Advanced Institute of Science and Technology, and a Ph.D. from Stanford University.
Alan C. H. Ling is an assistant professor in the Department of Computer Science at the University of Vermont. His research interests include combinatorial design theory, coding theory, sequence designs, and applications of design theory.
137.
Asimov’s “three laws of robotics” and machine metaethics
Susan Leigh Anderson 《AI & Society》2008,22(4):477-493
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
138.
A dynamic workflow framework for mass customization using web service and autonomous agent techniques
Daniel J. Karpowitz, Jordan J. Cox, Jeffrey C. Humpherys, Sean C. Warnick 《Journal of Intelligent Manufacturing》2008,19(5):537-552
Custom software development and maintenance is one of the key expenses associated with developing automated systems for mass customization. This paper presents a method for reducing the risk associated with this expense by developing a flexible environment for determining and executing dynamic workflow paths. Strategies for developing an autonomous agent-based framework and for identifying and creating web services for specific process tasks are presented. The proposed methods are outlined in two different case studies to illustrate the approach for both a generic process with complex workflow paths and a more specific sequential engineering process.
139.
In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a ‘self‐repair’ operation in the event of agent loss to recover desirable information architecture properties or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph called rigidity and global rigidity, which are required for formation shape maintenance and sensor network self‐localization, respectively. For the self‐repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that can be applied to make the local repair using only local information, and we describe an implementation with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call these properties 2‐vertex‐rigidity and 2‐vertex‐global‐rigidity, respectively). Information architectures with such properties would allow formation shape maintenance or self‐localization to be performed even in the event of agent failure. We review a characterization of one class of 2‐vertex‐rigid graphs and develop a separate class, making significant strides towards a complete characterization. We also present a characterization of a class of 2‐vertex‐globally‐rigid graphs. Copyright © 2008 John Wiley & Sons, Ltd.
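Rigidity of a given framework can be checked numerically from the rank of its rigidity matrix (rank 2|V| − 3 for generic rigidity in the plane), and 2-vertex-rigidity can then be probed by deleting each vertex in turn and re-testing. The sketch below does this for the complete graph K4, which stays rigid after any single vertex loss, and for K4 minus an edge, which is rigid but does not; the numeric rank test on one random generic placement is a stand-in for the paper's combinatorial characterizations:

```python
import numpy as np

def rigidity_rank(pos, edges):
    """Rank of the 2-D rigidity matrix of the framework (pos, edges)."""
    n = len(pos)
    R = np.zeros((len(edges), 2 * n))
    for row, (i, j) in enumerate(edges):
        d = pos[i] - pos[j]
        R[row, 2 * i:2 * i + 2] = d
        R[row, 2 * j:2 * j + 2] = -d
    return np.linalg.matrix_rank(R)

def is_rigid(pos, edges):
    """Generic rigidity test in the plane: rank must reach 2n - 3."""
    return rigidity_rank(pos, edges) == 2 * len(pos) - 3

def rigid_after_any_vertex_loss(pos, edges):
    """Numeric probe of 2-vertex-rigidity: delete each vertex in turn and
    re-test rigidity of the remaining framework."""
    n = len(pos)
    for v in range(n):
        keep = [u for u in range(n) if u != v]
        remap = {u: k for k, u in enumerate(keep)}
        sub = [(remap[i], remap[j]) for i, j in edges if v not in (i, j)]
        if not is_rigid(pos[keep], sub):
            return False
    return True

rng = np.random.default_rng(2)
pos = rng.random((4, 2))                                  # generic placement
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # complete graph
K4_minus = K4[:-1]                                        # drop edge (2, 3)
```

K4 minus an edge is generically rigid (5 edges = 2n − 3), but deleting one of the degree-3 vertices leaves a path, which flexes; this is the kind of structure the 2-vertex-rigidity characterizations rule out.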
140.
A stability robustness test is developed for internally stable, nominal, linear time‐invariant (LTI) feedback systems subject to structured, linear time‐varying uncertainty. There exists (in the literature) a necessary and sufficient structured small gain condition that determines robust stability in such cases. In this paper, the structured small gain theorem is utilized to formulate a (sufficient) stability robustness condition in a scaled LTI ν‐gap metric framework. The scaled LTI ν‐gap metric stability condition is shown to be computable via linear matrix inequality techniques, similar to the structured small gain condition. Apart from a comparison with a generalized robust stability margin as the final part of the stability test, however, the solution algorithm implemented to test the scaled LTI ν‐gap metric stability robustness condition is shown to be independent of knowledge about the controller transfer function (as opposed to the LMI feasibility problem associated with the scaled small gain condition which is dependent on knowledge about the controller). Thus, given a nominal plant and a structured uncertainty set, the stability robustness condition presented in this paper provides a single constraint on a controller (in terms of a large enough generalized robust stability margin) that (sufficiently) guarantees to stabilize all plants in the uncertainty set. Copyright © 2008 John Wiley & Sons, Ltd.
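The benefit of scaling in a structured small gain test can be seen even in a static toy example: against a diagonal (structured) uncertainty with norm at most 1, an interconnection matrix M may fail the plain condition σ̄(M) < 1 yet pass after a diagonal similarity scaling D M D⁻¹, because such scalings commute with the uncertainty structure and so preserve the closed loop. The sketch below grid-searches a one-parameter scaling; it is a numeric stand-in for the LMI computations in the paper, and the matrix entries are invented:

```python
import numpy as np

def sigma_max(A):
    """Largest singular value (the matrix 2-norm)."""
    return np.linalg.svd(A, compute_uv=False)[0]

# Invented static interconnection matrix seen by a diagonal uncertainty
# Delta = diag(d1, d2) with ||Delta|| <= 1.
M = np.array([[0.0, 2.0],
              [0.1, 0.0]])

unscaled = sigma_max(M)   # = 2.0, so the unscaled small gain test fails

# Scaled test: minimize sigma_max(D M D^-1) over diagonal D > 0. For a 2x2
# diagonal scaling one free parameter suffices (overall scale cancels).
ds = np.logspace(-2, 2, 2001)
scaled = min(sigma_max(np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0]))
             for d in ds)
# optimum balances the two off-diagonal gains: 2d = 0.1/d, value 2*sqrt(0.05)
```

Here the scaled value is about 0.447 < 1, so the structured (scaled) condition certifies robust stability where the unstructured test could not; the paper's ν-gap framework performs the dynamic, frequency-dependent analogue of this scaling via LMIs.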