991.
The implementation and testing of XtalOpt, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator, which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources compared to traditional generation-based algorithms, is employed. Various parameters in XtalOpt are optimized using a novel benchmarking scheme. XtalOpt is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy-to-use, intuitive graphical interface.
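To make the ideas concrete, here is a minimal, illustrative sketch of a steady-state evolutionary loop with a ripple-like mutation, a cut-and-splice crossover, and duplicate filtering. This is not XtalOpt's implementation: the operator forms, the `fitness` callback, and all names are assumptions for illustration (XtalOpt delegates energy evaluation to external codes such as VASP, PWSCF, or GULP).

```python
import math
import random

def ripple(coords, rho=0.3, mu=2):
    """Illustrative periodic 'ripple' displacement: shift each fractional
    z-coordinate by a sinusoidal wave over the x-coordinate."""
    return [(x, y, (z + rho * math.cos(2 * math.pi * mu * x)) % 1.0)
            for x, y, z in coords]

def crossover(a, b):
    """Cut-and-splice: take atoms on either side of a random plane
    from each parent (sketch; real operators conserve stoichiometry)."""
    cut = random.random()
    child = [p for p in a if p[0] < cut] + [p for p in b if p[0] >= cut]
    return child or a

def evolve(population, fitness, steps=100):
    """Continuous (steady-state) workflow: replace the current worst
    structure as soon as a new one is scored, instead of waiting for
    a full generation to finish."""
    seen = set()
    for _ in range(steps):
        p1, p2 = random.sample(population, 2)
        child = ripple(crossover(p1, p2))  # hybrid operator: crossover + ripple
        key = tuple(round(c, 2) for xyz in child for c in xyz)
        if key in seen:                    # duplicate filter
            continue
        seen.add(key)
        worst = max(range(len(population)), key=lambda i: fitness(population[i]))
        if fitness(child) < fitness(population[worst]):
            population[worst] = child
    return min(population, key=fitness)
```

Because a structure is only ever replaced by a strictly better one, the best fitness in the population can never worsen, which is the property the continuous workflow relies on.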
Program summary
Program title: XtalOpt
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8], and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution to locate the global minimum on a system's potential energy surface. Our evolutionary algorithm, XtalOpt, is freely available to the scientific community for use and collaboration under the GNU Public License.
Running time: User dependent. The program runs until stopped by the user.
References:
[1] http://www.gnu.org/licenses/gpl.html
[2] http://www.trolltech.com/
[3] http://openbabel.org/
[4] http://avogadro.openmolecules.net
[5] http://cms.mpi.univie.ac.at/vasp
[6] http://www.quantum-espresso.org
[7] https://www.ivec.org/gulp
[8] http://spglib.sourceforge.net
992.
Using Wang–Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse “transition” to a globule state followed by a second “transition” into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of “transitions”. These transitions depend upon the relative interaction strengths and are largely inaccessible to “standard” Monte Carlo methods.
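The Wang–Landau procedure referenced above can be sketched for any small discrete system. The following is a generic, illustrative implementation (a toy two-site system, not the HP model, and not the authors' trial moves): a random walk in energy space accepts a move from E_old to E_new with probability min(1, g(E_old)/g(E_new)), updates ln g and a visit histogram, and halves ln f whenever the histogram is flat.

```python
import math
import random

def wang_landau(energy, propose, state, energies, flat=0.8, ln_f_final=1e-3):
    """Estimate ln g(E) over the given energy levels.
    energy(state) -> level; propose(state) -> trial state.
    Assumes every level in `energies` is reachable, else this loops forever."""
    ln_g = {e: 0.0 for e in energies}
    ln_f = 1.0
    while ln_f > ln_f_final:
        hist = {e: 0 for e in energies}
        while True:
            new = propose(state)
            # accept with min(1, g(E_old)/g(E_new))
            if random.random() < math.exp(
                    min(0.0, ln_g[energy(state)] - ln_g[energy(new)])):
                state = new
            e = energy(state)
            ln_g[e] += ln_f
            hist[e] += 1
            vals = hist.values()
            # flatness check: every level visited at least `flat` * mean times
            if min(vals) > 0 and min(vals) > flat * (sum(vals) / len(vals)):
                break
        ln_f /= 2.0  # refine the modification factor
    return ln_g
```

Thermodynamic quantities then follow from ln g(E) by reweighting at any temperature, which is what makes the method attractive for systems with competing "transitions".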
993.
Map information for drivers is usually presented in an allocentric-topographic form (as with printed maps) or in an egocentric-schematic form (as with road signs). The advent of new variable message boards on UK motorways raises the possibility of presenting road maps that reflect congestion ahead. Should these maps be allocentric-topographic or egocentric-schematic? This was assessed in an eye-tracking study in which participants viewed maps of a motorway network in order to identify whether any congestion was relevant to their intended route. The egocentric-schematic maps were responded to most accurately and with shorter fixation durations, suggesting easier processing. In particular, the driver's entrance to and intended exit from the map were attended to more in the allocentric maps. Individual differences in mental rotation ability also seem to contribute to poor performance on allocentric maps. The results favour egocentric-schematic maps for roadside congestion information, but also provide theoretical insights into map rotation and individual differences. Statement of Relevance: This study informs designers and policy makers about optimum representations of traffic congestion on roadside variable message signs and, furthermore, demonstrates that individual differences contribute to problems with processing certain sign types. Egocentric-schematic representations of a motorway network produced the best results, as noted in behavioural and eye movement measures.
994.
The problem tackled in this article is that of associating perceived objects detected at a given time with known objects detected previously, given uncertain and imprecise information regarding the association of each perceived object with each known object. This problem can occur, for instance, during the association step of an obstacle-tracking process, especially in the context of vehicle driving aids. A contribution to the modelling of this association problem in the belief function framework is introduced. By interpreting belief functions as weighted opinions according to the Transferable Belief Model semantics, pieces of information regarding the association of known and perceived objects can be expressed in a common global association space, combined by the conjunctive rule of combination, and used for decision making via the pignistic transformation. The approach is validated on real data.
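The combination and decision steps named above are standard TBM machinery and can be illustrated directly (this is a generic sketch, not the authors' association model): the unnormalized conjunctive rule, which keeps conflict mass on the empty set, followed by the pignistic transformation.

```python
def conjunctive(m1, m2):
    """Unnormalized conjunctive rule of combination (TBM): mass may remain
    on the empty set as conflict. Focal sets are frozensets; each input's
    masses sum to 1."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            out[c] = out.get(c, 0.0) + wa * wb
    return out

def pignistic(m):
    """Pignistic transformation BetP: condition out the conflict m(empty),
    then share each focal mass equally among its elements."""
    conflict = m.get(frozenset(), 0.0)
    bet = {}
    for a, w in m.items():
        for x in a:
            bet[x] = bet.get(x, 0.0) + w / (len(a) * (1.0 - conflict))
    return bet
```

For example, combining an opinion that mostly supports association hypothesis `a` with one that weakly supports `b` leaves 0.18 of mass on the empty set (conflict), and the pignistic probabilities over {a, b} then sum to 1, giving a point on which to decide.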
995.
Jesse Kamp, Anup Rao, Salil Vadhan, David Zuckerman 《Journal of Computer and System Sciences》2011,77(1):191-220
We give polynomial-time, deterministic randomness extractors for sources generated in small space, where we model space-s sources on {0,1}^n as sources generated by width-2^s branching programs. Specifically, there is a constant η > 0 such that for any ζ > n^(−η), our algorithm extracts m = (δ − ζ)n bits that are exponentially close to uniform (in variation distance) from space-s sources with min-entropy δn, where s = Ω(ζ³n). Previously, nothing was known for δ ≪ 1, even for space 0. Our results are obtained by a reduction to the class of total-entropy independent sources. This model generalizes both the well-studied models of independent sources and symbol-fixing sources. These sources consist of a set of r independent smaller sources over {0,1}^ℓ, where the total min-entropy over all the smaller sources is k. We give deterministic extractors for such sources when k is as small as polylog(r), for small enough ℓ.
996.
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches.
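The two agreement definitions used above (primary-label-only vs. primary-or-alternate) are easy to make concrete. A toy computation with invented records, not the NLCD data:

```python
def overall_accuracy(records, allow_alternate=False):
    """records: (map_class, primary_ref, alternate_ref) tuples.
    Agreement = map class matches the primary reference label,
    or the alternate label too when allow_alternate is True."""
    hits = sum(
        1 for map_cls, primary, alternate in records
        if map_cls == primary or (allow_alternate and map_cls == alternate)
    )
    return hits / len(records)
```

Allowing the alternate label can only raise (never lower) the estimate, which is why the Alaska NLCD reports a higher accuracy under the primary-or-alternate definition.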
997.
Hong-Quang Nguyen, David Taniar 《Journal of Systems and Software》2011,84(1):63-76
Schema integration aims to create a mediated schema as a unified representation of existing heterogeneous sources sharing a common application domain. These sources have been increasingly written in XML due to its versatility and expressive power. Unfortunately, these sources often use different elements and structures to express the same concepts and relations, thus causing substantial semantic and structural conflicts. Such a challenge impedes the creation of high-quality mediated schemas and has not been adequately addressed by existing integration methods. In this paper, we propose a novel method, named XINTOR, for automating the integration of heterogeneous schemas. Given a set of XML sources and a set of correspondences between the source schemas, our method aims to create a complete and minimal mediated schema: it completely captures all of the concepts and relations in the sources without duplication, provided that the concepts do not overlap. Our contributions are fourfold. First, we resolve structural conflicts inherent in the source schemas. Second, we introduce a new statistics-based measure, called path cohesion, for selecting concepts and relations to be a part of the mediated schema. The path cohesion is statistically computed based on multiple path quality dimensions such as average path length and path frequency. Third, we resolve semantic conflicts by augmenting the semantics of similar concepts with context-dependent information. Finally, we propose a novel double-layered mediated schema to retain a wider range of concepts and relations than existing mediated schemas, which are at best either complete or minimal, but not both. Experimental results on both real and synthetic datasets show that XINTOR outperforms existing methods with respect to (i) mediated-schema quality, using precision, recall, F-measure, and schema minimality; and (ii) execution performance, based on execution time and scale-up performance.
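The abstract names two of path cohesion's quality dimensions (path frequency and path length) but not its formula, so the following is a purely hypothetical combination of just those two dimensions, for intuition only: frequent, short element paths score higher.

```python
from collections import Counter

def path_cohesion(paths):
    """paths: list of element paths (tuples of tag names), one entry per
    occurrence across the sources. Hypothetical score per distinct path:
    relative frequency discounted by path length, so short paths shared
    by many sources rank higher for inclusion in the mediated schema."""
    freq = Counter(paths)
    total = len(paths)
    return {p: (freq[p] / total) / len(p) for p in freq}
```

XINTOR's actual measure combines multiple dimensions statistically; this sketch only shows why such a score lets an integrator keep widely shared, concise structure and drop idiosyncratic deep paths.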
998.
We introduce a fully automatic algorithm which optimizes the high-level structure of a given quadrilateral mesh to achieve a coarser quadrangular base complex. Such a topological optimization is highly desirable, since state-of-the-art quadrangulation techniques lead to meshes which have an appropriate singularity distribution and an anisotropic element alignment, but usually they are still far away from the high-level structure which is typical for carefully designed meshes manually created by specialists and used e.g. in animation or simulation. In this paper we show that the quality of the high-level structure is negatively affected by helical configurations within the quadrilateral mesh. Consequently we present an algorithm which detects helices and is able to remove most of them by applying a novel grid preserving simplification operator (GP-operator) which is guaranteed to maintain an all-quadrilateral mesh. Additionally it preserves the given singularity distribution and in particular does not introduce new singularities. For each helix we construct a directed graph in which cycles through the start vertex encode operations to remove the corresponding helix. Therefore a simple graph search algorithm can be performed iteratively to remove as many helices as possible and thus improve the high-level structure in a greedy fashion. We demonstrate the usefulness of our automatic structure optimization technique by showing several examples with varying complexity.
999.
David Martens, Jan Vanthienen, Wouter Verbeke, Bart Baesens 《Decision Support Systems》2011,51(4):782-793
This paper proposes a complete framework to assess the overall performance of classification models from a user perspective in terms of accuracy, comprehensibility, and justifiability. A review is provided of accuracy and comprehensibility measures, and a novel metric is introduced that allows one to measure the justifiability of classification models. Furthermore, a taxonomy of domain constraints is introduced, and an overview is presented of existing approaches to imposing constraints and including domain knowledge in data mining techniques. Finally, the justifiability metric is applied to a credit scoring and a customer churn prediction case.
1000.
Analyses of systems that can be represented by functional responses are becoming common in many scientific disciplines. Functional regression trees (FRT) provide a methodology for modelling such systems. Recent work has focused on fitting models where the response variable is a probability density function, using a splitting criterion that is based on the sum of dissimilarities between the densities. We suggest a different criterion based on deviations of the densities from their mean. We provide motivation and justification for this criterion, and demonstrate its superior performance using an extensive simulation exercise. We discuss the computational aspects of the FRT procedure and show that substantial speed gains can be made through use of a dissimilarity matrix. Our results show that the proposed splitting criterion outperforms both the original and a splitting criterion based on Euclidean distance. Pointwise standard error curves for a predicted functional response can be generated through the fitting procedure, which we demonstrate in a case study with a forestry data set. Supplementary materials are available.
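A criterion based on "deviations of the densities from their mean" can be sketched for densities sampled on a common grid (names and the exact discretization are mine, not the paper's): score a candidate node by the summed integrated squared deviation of each member density from the pointwise mean density, so a split is good when it produces child nodes with low within-node scores.

```python
def deviation_criterion(densities, dx):
    """densities: list of equal-length sequences, each a density sampled
    on the same grid with spacing dx. Returns the sum over densities of
    the integrated squared deviation from the pointwise mean density
    (trapezoid-free Riemann sum, adequate for a sketch)."""
    n = len(densities)
    mean = [sum(col) / n for col in zip(*densities)]
    return sum(
        sum((f_t - m_t) ** 2 for f_t, m_t in zip(f, mean)) * dx
        for f in densities
    )
```

The score is zero exactly when all densities in the node coincide, and grows as they spread around their mean, which is the homogeneity notion a regression-tree split seeks to maximize the reduction of.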