391.
In this paper, we propose a new high-speed computation algorithm for solving a large N×N matrix system on the MIMD–SIMD Hybrid System. The MIMD–SIMD Hybrid System (also denoted the Hybrid System in this paper) is a new parallel architecture that combines a Cluster of Workstations (COWs) with SIMD systems working concurrently to produce an optimal parallel computation. We first introduce our prototype SIMD system and our Hybrid System setup before presenting how the system can be used to find the unknowns in a large N×N linear matrix equation system using the Gauss–LU algorithm. The algorithm follows a 'divide and conquer' approach, breaking the large N×N matrix system down into manageable 32 × 32 matrices for fast computation.
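The divide-and-conquer idea described above can be sketched as a recursive Schur-complement solve that bottoms out at the 32-element block size the abstract mentions. This is a minimal single-machine illustration assuming well-conditioned leading blocks; the function name `solve_blocked` and the recursion scheme are our assumptions, not the authors' Gauss–LU implementation or its MIMD–SIMD distribution.

```python
import numpy as np

def solve_blocked(A, b, block=32):
    """Solve A x = b by recursively splitting A into 2x2 blocks
    (Schur-complement divide and conquer). Illustrative sketch only."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.solve(A, b)           # small leaf: solve directly
    k = n // 2
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    b1, b2 = b[:k], b[k:]
    Y = np.linalg.solve(A11, A12)              # A11^{-1} A12
    z = np.linalg.solve(A11, b1)               # A11^{-1} b1
    S = A22 - A21 @ Y                          # Schur complement of A11
    x2 = solve_blocked(S, b2 - A21 @ z, block) # recurse on the smaller system
    x1 = z - Y @ x2                            # back-substitute the top block
    return np.concatenate([x1, x2])
```

Each recursion level halves the problem, so a 128×128 system reaches the 32×32 leaves after two splits; in the parallel setting described above, the independent block operations are what would be farmed out to the SIMD nodes.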
392.
Georg Schneider Heiko Wersing Bernhard Sendhoff Edgar Körner 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2005,35(3):426-437
A major problem in designing artificial neural networks is the proper choice of the network architecture. This problem is especially challenging for vision networks classifying three-dimensional (3-D) objects, as these networks are necessarily large and the search space for defining them is therefore of very high dimensionality. This strongly increases the chances of obtaining only suboptimal structures from standard optimization algorithms. We tackle this problem in two ways. First, we use biologically inspired hierarchical vision models to narrow the space of possible architectures and to reduce the dimensionality of the search space. Second, we employ evolutionary optimization techniques to determine optimal features and nonlinearities of the visual hierarchy. Here, we especially focus on higher-order complex features in higher hierarchical stages. We compare two different approaches to the evolutionary optimization of these features. In the first setting, we code the features directly into the genome. In the second setting, in analogy to an ontogenetic development process, we suggest a new method that codes the features indirectly via an unsupervised learning process embedded into the evolutionary optimization. In both cases the processing nonlinearities are encoded directly into the genome and are thus subject to optimization. The fitness of the individuals for the evolutionary selection process is computed by measuring the network classification performance on a benchmark image database, using a nearest-neighbor classifier based on the hierarchical feature output. We compare the resulting solutions with respect to their ability to generalize, differentiating between first- and second-order generalization.
First-order generalization denotes how well the vision system, after evolutionary optimization of the features and nonlinearities on a database A, can classify previously unseen test views of objects from that database. Second-order generalization denotes the ability of the vision system to perform classification on a database B using the features and nonlinearities optimized on database A. We show that the direct feature-coding approach leads to networks with better first-order generalization, whereas second-order generalization is equally high for both direct and indirect coding. We also compare the second-order generalization results with other state-of-the-art recognition systems and show that both approaches yield optimized recognition systems that are highly competitive with recent recognition algorithms.
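The "direct coding" setting described above, where the parameters under optimization sit directly in the genome, can be illustrated with a minimal greedy evolution strategy. The function `evolve`, its arguments, and the toy fitness interface are hypothetical illustrations; real neuroevolution of vision hierarchies involves far richer genomes and selection schemes.

```python
import numpy as np

def evolve(fitness, dim, pop=20, gens=50, sigma=0.1, seed=0):
    """Minimal greedy (1+lambda)-style evolution loop: the feature
    parameters are coded directly into the genome (a real vector)
    and perturbed by Gaussian mutation. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(dim)            # random initial genome
    best_fit = fitness(best)
    for _ in range(gens):
        # Each generation: mutate the parent into `pop` children.
        children = best + sigma * rng.standard_normal((pop, dim))
        fits = np.array([fitness(c) for c in children])
        i = int(np.argmax(fits))
        if fits[i] > best_fit:                 # keep only improvements
            best, best_fit = children[i], fits[i]
    return best, best_fit
```

In the paper's setting, evaluating `fitness` would mean running the hierarchical vision network with the encoded features and measuring nearest-neighbor classification accuracy on the benchmark database; the indirect-coding variant would instead let an unsupervised learning step derive the features before evaluation.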
393.
Convex rear-view mirrors are increasingly replacing planar mirrors in automobiles. While they enlarge the field of view, convex mirrors are also believed to inflate distance estimates and thereby reduce safety margins. However, this study failed to replicate systematic distance-estimation errors in a real-world setting. Whereas distance estimates were accurate on average, convex mirrors led to significantly more variance in distance and spacing estimations. A second experiment explored the effect of mirrors on time-to-contact estimations, which had not been researched previously. Potential effects of display size were separated from effects caused by distortion in convex mirrors. Time-to-contact estimations without a mirror were the most accurate. However, it was visual angle, not distortion, that seemed to cause estimation biases. Evaluating the advantages and disadvantages of convex mirrors is far more complex than previously assumed.
394.
Sparse coding is an important approach to the unsupervised learning of sensory features. In this contribution, we present two new methods that extend the traditional sparse coding approach with supervised components. Our goal is to increase the suitability of the learned features for classification tasks while keeping most of their general representation capability. We analyze the effect of the new methods using visualizations on artificial data, and we discuss the results on two object test sets with regard to the properties of the found feature representation.
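The plain unsupervised sparse coding that the methods above extend can be sketched with ISTA-style coefficient inference against a fixed dictionary. The function name, dictionary, and parameters are illustrative assumptions; the paper's supervised extensions would add a label-dependent term to this objective.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, steps=200):
    """Infer sparse coefficients a minimizing
        (1/2) * ||x - D a||_2^2 + lam * ||a||_1
    via ISTA (gradient step + soft threshold). Illustrative only."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)                  # gradient of the quadratic term
        a = a - g / L
        # Soft thresholding enforces sparsity of the code.
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a
```

A supervised component, in the spirit of the abstract, would augment the reconstruction objective with a classification-driven penalty so that the learned dictionary atoms become more discriminative while retaining most of their representation capability.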
395.
Serge Autexier Dieter Hutter Bruno Langenstein Heiko Mantel Georg Rock Axel Schairer Werner Stephan Roland Vogt Andreas Wolpers 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):66-77
The Verification Support Environment (VSE) is a tool to formally specify and verify complex systems. It provides the means to structure specifications and supports the development process from the specification of a system to the automatic generation of code. Formal developments following the VSE method are stored and maintained in an administration system that guides the user and maintains a consistent state of development. An integrated deduction system provides proof support for the deduction problems arising during the development process. We describe the application of VSE to an industrial case study and give an overview of the enhanced VSE system and the VSE methodology.
396.
Claudia Wenzel Heiko Maus 《International Journal on Document Analysis and Recognition》2001,3(4):248-260
Knowledge-based systems for document analysis and understanding (DAU) are quite useful whenever analysis has to deal with changing free-form document types that require different analysis components. In this case, declarative modeling is a good way to achieve flexibility. An important application domain for such systems is the business-letter domain, where high accuracy and the correct assignment to the right people and the right processes is a crucial success factor. Our solution proposes a comprehensive knowledge-centered approach: we model not only comparatively static knowledge concerning document properties and analysis results within the same declarative formalism, but also the analysis task and the current context of the system environment. This allows an easy definition of new analysis tasks as well as efficient and accurate analysis using expectations about incoming documents as context information. The approach described has been implemented in the VOPR (Virtual Office PRototype) system. This DAU system obtains the required context information from a commercial workflow management system (WfMS) through a constant exchange of expectations and analysis tasks. Further interaction between the two systems covers the delivery of results from the DAU system to the WfMS and the delivery of corrected results vice versa.
Received June 19, 1999 / Revised November 8, 2000
397.
In the first part of this paper, a Tool–Narayanaswamy–Moynihan (TNM) model extended by a non-Arrhenius temperature dependence of the relaxation time was applied to describe results from temperature-modulated DSC (TMDSC). The model is capable of describing the features of the heat capacities measured in TMDSC scan experiments in the glass transition region of polystyrene (PS). In this part, the model is applied to bisphenol-A polycarbonate (PC). Both aspects of the glass transition, vitrification as well as the dynamic glass transition, are again well described by the model. The dynamic glass transition above Tg can be considered a process in thermodynamic equilibrium. The non-linearity parameter (x) of the TNM model is not needed to describe the complex heat capacity as long as the dynamic glass transition is well separated from vitrification. Under such conditions, the relation between the cooling rate (q0) and the corresponding frequency (ω) can be found from the two independently observed glass transitions. Fictive temperature and the maximum of the imaginary part of the complex heat capacity are used for the comparison here. Both the measurements and the TNM model confirm the relation derived from Donth's fluctuation approach to the glass transition, ω = q0/(a·δT), where a = 5.5 ± 0.1 (previously confirmed experimentally as 6 ± 3) and δT is the mean temperature fluctuation of the cooperatively rearranging regions (CRRs).
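The quoted relation between cooling rate and frequency is a one-line computation. The factor a = 5.5 comes from the abstract above; the cooling rate and temperature fluctuation used below are purely illustrative assumptions, not values from the paper.

```python
# Donth-type relation between cooling rate q0 and the equivalent
# angular frequency omega: omega = q0 / (a * deltaT).
a = 5.5           # dimensionless factor reported above (5.5 +/- 0.1)
q0 = 10.0 / 60.0  # cooling rate in K/s (10 K/min, assumed for illustration)
deltaT = 2.5      # mean temperature fluctuation of a CRR in K (assumed)

omega = q0 / (a * deltaT)  # equivalent angular frequency in rad/s
```

With these assumed inputs a cooling experiment at 10 K/min maps to a modulation frequency of roughly 0.012 rad/s, illustrating how the two independently observed glass transitions can be compared on a common frequency axis.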
398.
To characterize parameters influencing antioxidant activity at interfaces, a novel ESR approach was developed that facilitates the investigation of the reaction stoichiometry of antioxidants towards stable radicals. To relate the activity of antioxidants to the location of radicals at interfaces, NMR experiments were conducted. Micellar solutions of SDS, Brij, and CTAB were used to model interfaces of different chemical nature. The hydrophilic Fremy's radical was found to be solubilized exclusively in the aqueous phase of the SDS micellar solution but partitioned partly into the hydrophilic headgroup area of Brij micelles. In contrast, the hydrophobic galvinoxyl was located exclusively in the micellar phase, with the depth of intercalation increasing in the order SDS < Brij < CTAB. Gallates revealed a higher stoichiometric factor towards galvinoxyl in CTAB systems, which is attributed to a concentration effect of antioxidant and radical both being solubilized in the palisade layer. In SDS solutions, by contrast, hardly any reaction between galvinoxyl and gallates was found: SDS acted as a physical barrier between the radical (palisade layer) and the antioxidant (Stern layer). The influence of the hydrophobic properties of the antioxidant was clearly seen in Brij micelles. Elongation of the alkyl chain in the gallate molecule resulted in increasing stoichiometric factors in the presence of galvinoxyl, which is located in the deeper region of the bulky headgroup area. The reverse trend was found in the presence of Fremy's radical, which is located in the hydrated area of the micelles.
399.
400.
Dyslipidemia is a pathological alteration of serum lipid levels. The most common forms are elevations of triglycerides or of low-density lipoprotein cholesterol, associated with a reduction of high-density lipoprotein cholesterol; most frequently both forms of lipid disorder are combined. Elevations of free fatty acid blood levels are commonly not subsumed under the term dyslipidemia. However, free fatty acids should also be considered, as they are frequently associated with dyslipidemia and represent a risk factor for cardiovascular diseases. Dyslipidemias are among the major etiologic factors for arterial occlusive diseases. With fatal consequences such as stroke and coronary heart disease, dyslipidemias contribute to the most prevalent causes of death. Lowering low-density lipoprotein cholesterol and raising high-density lipoprotein cholesterol levels have been shown in both epidemiologic and intervention studies to decrease mortality. Established treatments for dyslipidemias are statins and fibrates. However, recent research has identified new potential therapeutic targets that are currently being investigated in clinical trials. New therapeutic approaches include subtype-selective, dual, and pan-agonists of the peroxisome proliferator-activated receptor, as well as inhibitors of the cholesterol ester transfer protein, acyl-CoA cholesterol acyltransferase, squalene synthase, microsomal triglyceride transfer protein, and cholesterol absorption. Clinical implications of the new drugs under investigation are discussed in this review.