31.
This paper presents a new approach to Particle Swarm Optimization, called Michigan Approach PSO (MPSO), and its application
to continuous classification problems as a Nearest Prototype (NP) classifier. In Nearest Prototype classifiers, a collection
of prototypes has to be found that accurately represents the input patterns. The classifier then assigns classes based on
the nearest prototype in this collection. The MPSO algorithm is used to process training data to find those prototypes. In
the MPSO algorithm, each particle in the swarm represents a single prototype in the solution, and the algorithm uses modified movement
rules, with particle competition and cooperation, that ensure particle diversity. The proposed method is tested both with artificial
problems and with real benchmark problems and compared with several algorithms of the same family. Results show that the particles
are able to recognize clusters, find decision boundaries and reach stable situations that also retain adaptation potential.
The MPSO algorithm is able to improve the accuracy of 1-NN classifiers, obtains results comparable to the best among other
classifiers, and improves the accuracy reported in the literature for one of the problems.
Pedro Isasi
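The decision rule described above (classes assigned by the nearest prototype) can be sketched as follows. This is a minimal illustration only: the MPSO training step that positions the prototypes is omitted, and all names and data here are invented, not taken from the paper.

```python
import math

def nearest_prototype_class(x, prototypes):
    """Classify x by the class label of its nearest prototype.
    `prototypes` is a list of (position, label) pairs."""
    position, label = min(prototypes, key=lambda p: math.dist(x, p[0]))
    return label

# Toy swarm: in the Michigan approach, each particle encodes ONE prototype.
swarm = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]
print(nearest_prototype_class((0.2, 0.1), swarm))  # A
```

In the Michigan approach the swarm as a whole is the classifier, so evaluating a candidate solution reduces to exactly this rule applied over all particles.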
32.
In this paper, the performance and durability of hybrid PEM fuel cell vehicles are investigated. To that end, a hybrid predictive controller is proposed to improve battery performance and to avoid fuel cell and battery degradation. Such a controller handles this complex control problem through binary and continuous variables, piecewise affine models, and constraints. The control strategy tracks motor power demand while keeping the batteries close to a desired state of charge, which is chosen to minimize hydrogen consumption. Constraints directly related to the goals of this paper are considered, such as a minimum fuel cell power threshold and a time limit between fuel cell startups and shutdowns. Furthermore, different models have been elaborated and particularized for a vehicle prototype. These models include a few innovations, such as a reference governor that smooths fuel cell power demand during sharp power profiles, forcing the batteries to supply such peaks and resulting in a longer fuel cell lifetime. Battery thermal dynamics are also modeled in order to analyze the effect of battery temperature on degradation. Finally, this paper studies the feasibility of a real implementation, presenting an explicit formulation as a solution to reduce execution time. The explicit controller exhibits the same performance as the hybrid predictive controller with reduced computational effort. All the results have been validated in several simulations.
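The control objective above (follow motor power demand while steering the battery toward a reference state of charge, subject to a minimum fuel cell power threshold) can be pictured with a deliberately simplified, non-predictive power-split rule. The gains, limits, and names below are illustrative assumptions, not the paper's hybrid predictive controller.

```python
def split_power(p_demand, soc, soc_ref=0.6, p_fc_min=2.0, p_fc_max=50.0, k=40.0):
    """Toy power split (kW): the fuel cell follows demand plus an
    SOC-correction term, clipped to its operating range; the battery
    covers the mismatch. All values are illustrative."""
    p_fc = p_demand + k * (soc_ref - soc)      # steer SOC toward the reference
    p_fc = min(max(p_fc, p_fc_min), p_fc_max)  # minimum-power threshold and rating
    p_batt = p_demand - p_fc                   # battery supplies peaks / absorbs excess
    return p_fc, p_batt

print(split_power(60.0, 0.6))  # (50.0, 10.0): battery covers the peak
```

A predictive controller replaces this one-step rule with an optimization over a horizon, which is what allows it to also enforce timing constraints such as a minimum interval between fuel cell startups and shutdowns.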
33.
Alejandro Rago Claudia Marcos J. Andres Diaz-Pace 《Automated Software Engineering》2016,23(2):219-252
Textual requirements are very common in software projects. However, this format of requirements often keeps relevant concerns (e.g., performance, synchronization, data access) out of the analyst's view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. Then, REAssistant allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The REAssistant tool has been evaluated with several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, the tool achieved remarkable recall in the detection of crosscutting concern effects.
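The query step (concern-specific rules evaluated over NLP annotations) can be pictured with a toy example. The annotation structure and rule below are invented for illustration and do not reflect REAssistant's actual annotation format or rule language.

```python
# Each use-case step carries annotations produced by an (imagined) NLP stage.
steps = [
    {"id": 1, "text": "The system logs the request",  "action": "log"},
    {"id": 2, "text": "The system stores the record", "action": "persist"},
    {"id": 3, "text": "The user confirms the order",  "action": "confirm"},
]

def query(steps, rule):
    """Return the ids of all steps that a concern rule matches."""
    return [s["id"] for s in steps if rule(s)]

persistence_rule = lambda s: s["action"] in {"log", "persist"}
print(query(steps, persistence_rule))  # [1, 2]
```

The point of running such rules over semantic annotations rather than raw text is that a concern's effects scattered across many steps (its crosscutting footprint) surface in a single query.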
34.
35.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them and illustrate its use with three examples.
Program summary
Title of the program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming language used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e− annihilation in one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams with later evaluation of matrix elements and form factors.
Restriction of the complexity of the problem: the limit of application is, for the moment, the 2→2 particle reactions in the electro-weak standard model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.
36.
Butakoff C Frangi AF 《IEEE transactions on pattern analysis and machine intelligence》2006,28(11):1847-1857
This paper presents a framework for weighted fusion of several active shape and active appearance models. The approach is based on the eigenspace fusion method proposed by Hall et al., which has been extended to fuse more than two weighted eigenspaces using unbiased mean and covariance matrix estimates. To evaluate the performance of fusion, a comparative assessment of segmentation precision as well as facial verification tests are performed using the AR, EQUINOX, and XM2VTS databases. Based on the results, it is concluded that fusion is useful when the model needs to be updated online or when the original observations are absent.
37.
The use of nanoparticles (NPs) in manufacturing continues to increase despite the growing concern over their potential environmental and health effects. Understanding the interaction of NPs and sewage sludge is crucial for determining the ultimate fate of NPs released to municipal wastewater treatment plants (WWTPs), as those interactions will determine whether the bulk of the material is retained in the sludge or released in the effluent stream. Analyzing the affinity of aluminum oxide, cerium oxide, and silicon oxide NPs, which are commonly used in semiconductor manufacturing processes, for the biosolids used in municipal WWTPs provides a basis for estimating their removal efficiency. Batch studies were performed, and the NPs were shown to partition onto the cellular surface. At the maximum equilibrium values tested (75-92 mg nanoparticles/L), the concentrations of Al2O3, CeO2, and SiO2 associated with the sludge were 137, 238, and 28 mg/g-sludge VSS, respectively. These results suggest that electrostatic interactions play a major role in determining NP association with biosolids.
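Sludge-associated loadings like those reported above follow from a simple batch mass balance: the NP mass lost from solution is attributed to the sludge and normalized by the volatile suspended solids. A minimal version of that calculation, with made-up input numbers rather than the paper's data, is:

```python
def sorbed_loading(c0_mg_per_L, ce_mg_per_L, volume_L, sludge_vss_g):
    """NP mass associated with the sludge per gram of VSS (mg/g),
    from the decrease of NP concentration in solution at equilibrium.
    Input values used below are illustrative."""
    return (c0_mg_per_L - ce_mg_per_L) * volume_L / sludge_vss_g

print(sorbed_loading(100.0, 80.0, 1.0, 0.5))  # 40.0 mg/g-VSS
```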
38.
Alejandro Echeverría Matías Améstica Francisca Gil Miguel Nussbaum Enrique Barrios Sandra Leclerc 《Computers in human behavior》2012
Computer Supported Collaborative Learning is a pedagogical approach that can be used for deploying educational games in the classroom. However, there is no clear understanding as to which technological platforms are better suited for deploying co-located collaborative games, nor of the general affordances that are required. In this work we explore two different technological platforms for developing collaborative games in the classroom: one based on augmented reality technology and the other based on multiple-mice technology. In both cases, the same game was introduced to teach electrostatics, and the results were compared experimentally using a real class.
39.
Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract relevant facts from articles regarding the question across KBs, and these are then projected into the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever KBs supply narrow coverage. This work describes a new approach to deal with this problem by constructing context models for scoring candidate answers; these are, more precisely, statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are created by capturing the semantics of candidate answers (e.g., "novel," "singer," "coach," and "city"). This work is extended by investigating the impact on context models of extra linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of n-gram lexicalized dependency paths as context models and as promising indicators of the presence of definitions in natural-language texts.
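The context models described above are n-gram language models over lexicalized dependency paths. An unsmoothed bigram version over toy token sequences (a simplification of the paper's models, with invented data standing in for real dependency paths) looks like this:

```python
from collections import Counter

def bigram_model(paths):
    """Build unsmoothed bigram probabilities P(b | a) from token sequences."""
    bigrams, unigrams = Counter(), Counter()
    for toks in paths:
        for a, b in zip(toks, toks[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return lambda a, b: bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0

# Two toy "paths" anchored on the candidate-answer word "city".
p = bigram_model([["city", "located", "in"], ["city", "founded", "in"]])
print(p("city", "located"))  # 0.5
```

A candidate answer is then scored by how probable its surrounding path is under the model for its semantic class, which is why coverage of the class term matters more than coverage of the specific entity.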
40.