151.
Surface deterioration of concrete subjected to freezing and thawing in combination with deicing salts is one of the most important factors determining the durability of concrete infrastructure in cold climates. The freeze–thaw deicing salt (FTDS) resistance of cementitious materials can be determined by the capillary suction of de-icing chemicals and freeze–thaw (CDF) test: specimens are subjected to repeated freeze–thaw cycles with simultaneous exposure to deicing salt, and the amount of material scaled off near the surface is determined. For concretes with adequate FTDS resistance, this test method works very well. However, specimens with unknown performance often experience increased edge scaling, which falsifies the results and consequently leads to an underestimation of the actual freeze–thaw resistance. In materials research, however, concretes with high levels of surface deterioration are deliberately studied in order to isolate the various factors that influence the freeze–thaw resistance of concrete. This article presents a novel methodology that delivers new information on the surface deterioration of CDF samples using high-resolution 3D scan data. The change in volume is used to corroborate the deterioration results of the standard CDF methodology, and the increase in surface area is used to estimate the change in roughness of the samples.
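The two scan-based metrics described above (volume loss and surface-area increase) can be sketched on voxelized scan data. The function below is purely illustrative, not the authors' pipeline: the boolean voxel grids, the `scaling_metrics` name, and the face-counting surface estimate are all assumptions of this sketch.

```python
import numpy as np

def scaling_metrics(before: np.ndarray, after: np.ndarray, voxel_mm: float):
    """Compare two boolean voxel grids of a specimen scanned before and
    after freeze-thaw cycling.  Returns the scaled-off volume (mm^3) and
    the relative change in exposed surface area, approximated here by
    counting solid/empty voxel-face transitions along each axis."""
    def surface_faces(grid):
        faces = 0
        for axis in range(grid.ndim):
            # interior transitions between solid and empty voxels
            faces += np.count_nonzero(np.diff(grid.astype(np.int8), axis=axis))
            # exposed faces on the bounding box of the grid
            faces += np.count_nonzero(np.take(grid, 0, axis=axis))
            faces += np.count_nonzero(np.take(grid, -1, axis=axis))
        return faces

    vol_loss = (before.sum() - after.sum()) * voxel_mm ** 3
    area_change = surface_faces(after) / surface_faces(before) - 1.0
    return vol_loss, area_change
```

A positive `vol_loss` corresponds to scaled-off material; a positive `area_change` indicates roughening of the surface.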
152.
The purpose of the study was to examine the effects of manipulating lung volume (LV) on phonatory and articulatory kinematic behavior during sentence production in healthy adults. Five men and five women repeated the sentence "I sell a sapapple again" under five LV conditions. These included (1) speaking normally, (2) speaking after exhaling most of the air from the lungs, (3) speaking at end expiratory level (EEL), (4) speaking after a maximal inhalation, and (5) speaking after a maximal inhalation while attempting to maintain as normal a mode of speech as possible. From a multichannel recording, measures were made of LV, sound pressure level (SPL), fundamental frequency (F0) and semitone standard deviation (STSD), and upper and lower lip displacements and peak velocities. When compared with the reference condition, the sentence was spoken significantly more quickly at the lowest LV. SPL increased significantly for the high LV condition, as did the women's F0 and STSD. Upper lip displacements and peak velocities generally decreased for LVs other than the reference condition. Lower lip movements showed inconsistent changes as a function of LV. Adjustments to the LV for speech led to SPL and F0 changes consistent with a coordinated control of the respiratory system and the larynx. However, less consistent effects were observed in the articulatory kinematic measures, possibly because of a less direct biomechanical and neural control linkage between respiratory and articulatory structures.
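The semitone standard deviation (STSD) measure used in the study can be computed from an F0 contour as below. This is a generic sketch: the reference-frequency convention (here the geometric mean of the contour) is an assumption, since the abstract does not specify one.

```python
import math
import statistics

def semitone_sd(f0_hz, ref_hz=None):
    """Semitone standard deviation (STSD) of an F0 contour.  Each F0
    sample is converted to semitones relative to a reference frequency
    (by default the geometric mean of the contour, a common convention),
    and the standard deviation of the converted values is returned."""
    if ref_hz is None:
        # geometric mean = exp of the mean log frequency
        ref_hz = math.exp(sum(math.log(f) for f in f0_hz) / len(f0_hz))
    st = [12.0 * math.log2(f / ref_hz) for f in f0_hz]
    return statistics.pstdev(st)
```

A monotone contour yields an STSD of 0; an octave (12-semitone) spread between two samples yields an STSD of 6 semitones around their geometric mean.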
153.
We consider the problem of determining when two dataflow networks with uninterpreted nodes always have the same input-output behavior. We define a set of behavior-preserving transformations on networks and show that this set is “schematologically complete”; i.e., networks have the same input-output behavior under all interpretations if and only if they can be transformed into isomorphic networks. As a by-product, we obtain a polynomial algorithm for deciding schematological equivalence of dataflow networks.
154.
Infrastructure federation is becoming an increasingly important issue for modern Distributed Computing Infrastructures (DCIs): Dynamic elasticity of quasi-static Grid environments, incorporation of special-purpose resources into commoditized Cloud infrastructures, cross-community collaboration for increasingly diverging areas of modern e-Science, and Cloud Bursting pose major challenges on the technical level for many resource and middleware providers. Especially with respect to the increasing costs of operating data centers, the intelligent yet automated and secure sharing of resources is a key factor for success. With the D-Grid Scheduler Interoperability (DGSI) project within the German D-Grid Initiative, we provide a strategic technology for the automatically negotiated, SLA-secured, dynamically provisioned federation of resources and services for Grid- and Cloud-type infrastructures. This goal is achieved by complementing current DCI schedulers with the ability to federate infrastructure for the temporary leasing of resources and the rechanneling of workloads. In this work, we describe the overall architecture and the SLA-secured negotiation protocols within DGSI, and depict an advanced mechanism for resource delegation by means of dynamically provisioned, virtualized middleware. Through this methodology, we provide the technological foundation for intelligent capacity planning and workload management in a cross-infrastructure fashion.
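The SLA-secured negotiation described above can be caricatured as a simple concession protocol between a resource consumer and a provider. The sketch below is purely illustrative and is not the DGSI protocol itself: the `LeaseOffer` fields, the gap-halving concession strategy, and the budget-based acceptance test are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class LeaseOffer:
    cores: int
    hours: int
    price_per_core_hour: float

    @property
    def total(self) -> float:
        return self.cores * self.hours * self.price_per_core_hour

def negotiate(request: LeaseOffer, budget: float, provider_floor: float,
              rounds: int = 5):
    """Toy SLA negotiation: in each round the provider halves the gap
    between its current ask and its price floor; an agreement (the SLA)
    is reached as soon as the asked total fits the consumer's budget."""
    ask = request.price_per_core_hour
    for _ in range(rounds):
        ask -= (ask - provider_floor) * 0.5   # provider concedes
        offer = LeaseOffer(request.cores, request.hours, ask)
        if offer.total <= budget:
            return offer                      # SLA agreed
    return None                               # negotiation failed
```

Real negotiation protocols (e.g. WS-Agreement-style offer/counter-offer exchanges) additionally carry guarantee terms and penalties; this sketch only captures the convergence of price offers.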
155.
156.
Classifiers based on radial basis function neural networks have a number of useful properties that can be exploited in many practical applications. Using sample data, it is possible to adjust their parameters (weights), to optimize their structure, and to select appropriate input features (attributes). Moreover, interpretable rules can be extracted from a trained classifier, and input samples can be identified that cannot be classified with a sufficient degree of “certainty”. These properties support an analysis of radial basis function classifiers and allow for an adaptation to “novel” kinds of input samples in a real-world application. In this article, we outline these properties and show how they can be exploited in the field of intrusion detection (detection of network-based misuse). Intrusion detection plays an increasingly important role in securing computer networks. In this case study, we first compare the classification abilities of radial basis function classifiers, multilayer perceptrons, the neuro-fuzzy system NEFCLASS, decision trees, classifying fuzzy-k-means, support vector machines, Bayesian networks, and nearest neighbor classifiers. Then, we investigate the interpretability and understandability of the best paradigms found in the previous step. We show how structure optimization and feature selection for radial basis function classifiers can be done by means of evolutionary algorithms, and compare this approach to decision trees optimized using certain pruning techniques. Finally, we demonstrate that radial basis function classifiers are, in principle, able to detect novel attack types. The many advantageous properties of radial basis function classifiers could certainly be exploited in other application fields in a similar way.
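A minimal radial basis function classifier with the rejection option mentioned above (samples that cannot be classified with sufficient “certainty”) can be sketched as follows. The centers and the rejection threshold are assumed given, and the least-squares readout is one common training choice, not necessarily the one used in the study.

```python
import numpy as np

def train_rbf(X, y, centers, gamma):
    """Fit the output weights of an RBF network by least squares.
    Hidden activations are Gaussian kernels around the given centers;
    the linear readout maps them to one-hot class scores."""
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    Y = np.eye(y.max() + 1)[y]                  # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def predict_rbf(X, centers, gamma, W, reject_below=None):
    """Classify X; optionally return -1 (reject) when the best class
    score falls below a threshold, i.e. the sample lies far from all
    basis function centers."""
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    scores = Phi @ W
    labels = scores.argmax(1)
    if reject_below is not None:
        labels = np.where(scores.max(1) >= reject_below, labels, -1)
    return labels
```

The rejection branch is what allows such a classifier to flag “novel” inputs (e.g. unseen attack types) instead of forcing them into a known class.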
157.
The rapidly increasing complexity of multi-body system models in applications such as vehicle dynamics, robotics, and biomechanics requires qualitatively new solution methods that drastically reduce computing times for dynamical simulation.
158.
Numerous numerical methods have been developed in an effort to accurately predict stresses in bones. The largest group are variants of the h-version of the finite element method (h-FEM), where low-order Ansatz functions are used. By contrast, we investigate a combination of high-order FEM and a fictitious domain approach, the finite cell method (FCM). While the FCM has been verified and validated in previous publications, this article proposes methods by which the FCM can be made computationally efficient to the extent that it can be used for patient-specific, interactive bone simulations. This approach is called computational steering: it allows the user to change input parameters such as the position of an implant, material properties, or loads, and leads to an almost instantaneous change in the output (stress lines, deformations). This direct feedback gives the user an immediate impression of the impact of their actions to an extent which is otherwise hard to obtain with classical non-interactive computations. Specifically, we investigate an application to pre-surgical planning of a total hip replacement, where it is desirable to select an optimal implant for a specific patient. Here, “optimal” means that the expected post-operative stress distribution in the bone closely resembles that before the operation.
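The fictitious-domain idea underlying the FCM can be illustrated in one dimension: integration extends over the whole embedding cell, and quadrature points outside the physical domain are weighted by a small indicator value instead of being excluded geometrically. The sketch below is a toy version, not the authors' high-order implementation; the function name, the midpoint quadrature, and the value of `eps` are assumptions.

```python
import numpy as np

def cell_stiffness_1d(a, b, inside, E=1.0, eps=1e-6, n_qp=50):
    """Fictitious-domain style evaluation of the 1D integral
    int_a^b E * alpha(x) dx over a cell [a, b], where the indicator
    alpha(x) is 1 inside the physical domain and a tiny eps outside.
    The cell geometry stays trivial; the domain enters only via alpha."""
    # midpoint rule on a fine sub-grid of the cell
    x = a + (np.arange(n_qp) + 0.5) * (b - a) / n_qp
    alpha = np.where(inside(x), 1.0, eps)
    return E * np.sum(alpha) * (b - a) / n_qp
```

Because only the indicator function changes when the domain moves (e.g. when an implant is repositioned), no remeshing is needed, which is what makes the interactive, steering-style usage plausible.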
159.
Enabling fast and detailed insights over large portions of source code is an important task in a global development ecosystem. Numerous data structures have been developed to store source code and to support various structural queries, to help in navigation, evaluation, and analysis. Many of these data structures work with tree-based or graph-based representations of source code. The goal of this project is to develop a data store that enables efficient storage and fast querying of structural information. The naive adjacency-list method has been enhanced with recent data compression approaches for column-oriented databases to allow lossless yet compact storage of fine-grained structural data. Graph indexing enables the proposed data model to answer fine-grained structural queries expeditiously. This paper describes the basics of the proposed approach and illustrates its technical feasibility.
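The enhanced adjacency-list idea can be sketched as a CSR-style column layout for the (parent, child) edges of a syntax tree, whose sorted id columns are then delta-encoded for a column store. The functions below are illustrative assumptions, not the project's actual storage format.

```python
def to_csr(n_nodes, edges):
    """Pack (parent, child) edges into CSR-style offset/target arrays,
    a compact, column-oriented adjacency layout."""
    edges = sorted(edges)
    offsets = [0] * (n_nodes + 1)
    targets = []
    for p, c in edges:
        offsets[p + 1] += 1
        targets.append(c)
    for i in range(n_nodes):          # prefix-sum the per-node counts
        offsets[i + 1] += offsets[i]
    return offsets, targets

def children(offsets, targets, node):
    """Answer the basic structural query 'children of node' as a slice."""
    return targets[offsets[node]:offsets[node + 1]]

def delta_encode(column):
    """Sorted id columns shrink to small gaps under delta encoding,
    which a byte-oriented codec can then store very compactly."""
    out, prev = [], 0
    for v in column:
        out.append(v - prev)
        prev = v
    return out
```

The `children` query costs a single slice; the delta-encoded columns keep the layout compact without loss of information.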
160.
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in the entire corpus. Their maximization algorithm allows weighted terms (similar words) from a distributional thesaurus to accumulate a score for each ambiguous word sense; i.e., the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Lin Dekang to obtain a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context where the ambiguous word occurs. Our method accounts for the context of a word when determining the sense of an ambiguous word by building the list of distributionally similar words based on the syntactic context of the ambiguous word. We obtain a top precision of 77.54% versus 67.10% for the original method, tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
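The weighted-vote scoring of McCarthy et al. described above can be sketched as follows. The `relatedness` callback stands in for a WordNet-style sense-relatedness measure, and the data structures are assumptions of this sketch; the context-sensitive variant proposed in the paper would simply pass in a neighbour list built from the ambiguous word's syntactic context.

```python
def predominant_sense(senses, neighbours, relatedness):
    """McCarthy-style predominant-sense scoring (simplified): every
    distributionally similar neighbour votes for each candidate sense,
    weighted by its thesaurus similarity times a sense-relatedness
    score; the highest-scoring sense wins."""
    scores = {s: 0.0 for s in senses}
    for word, similarity in neighbours:
        for s in senses:
            scores[s] += similarity * relatedness(word, s)
    return max(scores, key=scores.get)
```

With neighbours drawn from the whole corpus this reproduces the original corpus-level predominant sense; with neighbours drawn per occurrence it becomes the context-aware variant.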