Full-text access type
Paid full text | 2836 articles |
Free | 271 articles |
Free (domestic) | 1 article |
Subject category
Electrical engineering | 29 articles |
General | 1 article |
Chemical industry | 704 articles |
Metalworking | 49 articles |
Machinery and instruments | 115 articles |
Building science | 89 articles |
Mining engineering | 7 articles |
Energy and power | 100 articles |
Light industry | 651 articles |
Hydraulic engineering | 27 articles |
Oil and natural gas | 13 articles |
Radio electronics | 150 articles |
General industrial technology | 466 articles |
Metallurgical industry | 70 articles |
Atomic energy technology | 26 articles |
Automation technology | 611 articles |
Publication year
2024 | 14 articles |
2023 | 45 articles |
2022 | 101 articles |
2021 | 185 articles |
2020 | 122 articles |
2019 | 146 articles |
2018 | 152 articles |
2017 | 148 articles |
2016 | 149 articles |
2015 | 124 articles |
2014 | 177 articles |
2013 | 359 articles |
2012 | 233 articles |
2011 | 226 articles |
2010 | 171 articles |
2009 | 155 articles |
2008 | 116 articles |
2007 | 88 articles |
2006 | 72 articles |
2005 | 44 articles |
2004 | 42 articles |
2003 | 45 articles |
2002 | 37 articles |
2001 | 15 articles |
2000 | 15 articles |
1999 | 14 articles |
1998 | 23 articles |
1997 | 14 articles |
1996 | 7 articles |
1995 | 10 articles |
1994 | 7 articles |
1993 | 14 articles |
1992 | 3 articles |
1990 | 4 articles |
1989 | 2 articles |
1988 | 2 articles |
1987 | 2 articles |
1985 | 2 articles |
1983 | 2 articles |
1981 | 3 articles |
1980 | 1 article |
1979 | 1 article |
1978 | 2 articles |
1975 | 1 article |
1972 | 1 article |
1971 | 2 articles |
1965 | 2 articles |
1964 | 1 article |
1963 | 2 articles |
1938 | 2 articles |
Sort by: 3,108 results found (search time: 31 ms)
71.
Efrén Aguilar-Garnica Denis Dochain Víctor Alcaraz-González Víctor González-Álvarez 《Journal of Process Control》2009,19(8):1324-1332
This paper deals with the design and application of a nonlinear multivariable controller for an anaerobic digestion (AD) system carried out in two interconnected fixed-bed bioreactors. The proposed control scheme is derived from a mathematical model of the AD system described by a set of partial differential equations, and consists of an estimator and two nonlinear control laws. The first law regulates the volatile fatty acids in the first bioreactor, while the second aims at maintaining the chemical oxygen demand at predetermined set-points in the second bioreactor. The performance of the control algorithm is evaluated via numerical simulations in the face of load disturbances, kinetic parameter uncertainties, and set-point changes. Stability and convergence properties of the proposed control scheme are also addressed.
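The set-point regulation idea can be illustrated with a toy numerical sketch, not the authors' PDE model: a single substrate concentration driven to a set-point by a feedback-linearizing control law, integrated with explicit Euler. The dynamics, gains, and values here are all invented for illustration.

```python
# Toy sketch: regulate a substrate concentration s(t) with dynamics
# ds/dt = -k*s + u to a set-point s_ref. The linearizing control law
# u = k*s + lam*(s_ref - s) cancels the plant term, leaving the
# closed-loop error dynamics ds/dt = lam*(s_ref - s).
# k, lam, dt, and s_ref are illustrative values, not from the paper.

def simulate(s0, s_ref, k=0.5, lam=2.0, dt=0.01, steps=1000):
    s = s0
    for _ in range(steps):
        u = k * s + lam * (s_ref - s)   # nonlinear control law
        s += dt * (-k * s + u)          # explicit Euler step
    return s

final = simulate(s0=5.0, s_ref=1.0)   # converges toward s_ref = 1.0
```

Under this law the tracking error decays geometrically by a factor (1 - lam*dt) per step, so after 10 simulated time units the state is effectively at the set-point.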
72.
Alejandro Rago Claudia Marcos J. Andres Diaz-Pace 《Automated Software Engineering》2016,23(2):219-252
Textual requirements are very common in software projects. However, this format often keeps relevant concerns (e.g., performance, synchronization, data access) hidden from the analyst's view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. Then, REAssistant allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The REAssistant tool has been evaluated in several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, the tool achieved a remarkable recall regarding the detection of crosscutting concern effects.
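The rule-based querying step can be sketched in miniature. REAssistant itself queries rich UIMA annotations; the keyword "rules", concern names, and sentences below are invented stand-ins for illustration.

```python
# Hypothetical sketch of concern-specific rules applied to the sentences
# of a use case: each rule is a keyword set, and a match marks the
# sentence as affected by that concern (a crude proxy for the semantic
# annotations REAssistant actually queries).

CONCERN_RULES = {
    "performance": {"respond", "latency", "throughput", "seconds"},
    "persistence": {"store", "save", "database", "record"},
}

def detect_concerns(sentences):
    """Map each concern to the indices of the sentences it crosscuts."""
    hits = {concern: [] for concern in CONCERN_RULES}
    for i, sentence in enumerate(sentences):
        tokens = {tok.strip(".,").lower() for tok in sentence.split()}
        for concern, keywords in CONCERN_RULES.items():
            if tokens & keywords:
                hits[concern].append(i)
    return hits

use_case = [
    "The system shall store each order in the database.",
    "The system shall respond within two seconds.",
]
found = detect_concerns(use_case)
```

A concern whose hit list spans many scattered sentences is a candidate crosscutting concern; real detection additionally needs the semantic analysis the paper describes.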
73.
Alejandro Baldominos Javier Calle Dolores Cuadra 《Pattern Analysis & Applications》2017,20(1):269-285
This work aims at discovering and extracting relevant patterns underlying social interactions. To do so, knowledge extracted from Facebook, a social networking site, is formalised by means of an Extended Social Graph, a data structure which goes beyond the original concept of a social graph by also incorporating information on interests. Once the Extended Social Graph is built, state-of-the-art techniques are applied over it in order to discover communities. When these social communities are found, statistical techniques look for relevant patterns common to each of them, in such a way that each cluster of users is characterised by a set of common features. The resulting knowledge is used to develop and evaluate a social recommender system, which aims at suggesting possible friends or interests to users in a social network.
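Community discovery over a friendship graph can be sketched with a minimal synchronous label-propagation pass (the paper applies state-of-the-art techniques to its Extended Social Graph; this toy graph and the simplistic tie-breaking rule are illustrative only).

```python
# Minimal label propagation: every node starts with its own label and
# repeatedly adopts the most frequent label among its neighbours
# (ties broken by taking the lexicographically smallest label).
# Densely connected groups converge to a shared label, i.e. a community.

def label_propagation(adj, rounds=10):
    """adj: dict node -> tuple of neighbours. Returns node -> label."""
    labels = {node: node for node in adj}
    for _ in range(rounds):
        new = {}
        for node in sorted(adj):
            counts = {}
            for nb in adj[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            best = max(counts.values())
            new[node] = min(l for l, c in counts.items() if c == best)
        labels = new                      # synchronous update
    return labels

# Two triangles joined by a single bridge edge c--x.
adj = {
    "a": ("b", "c"), "b": ("a", "c"), "c": ("a", "b", "x"),
    "x": ("c", "y", "z"), "y": ("x", "z"), "z": ("x", "y"),
}
labels = label_propagation(adj)   # {a,b,c} and {x,y,z} end up in
                                  # two different communities
```

On this graph the two triangles keep distinct labels because the bridge contributes only one vote at each endpoint, which the within-triangle majority always outweighs.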
74.
Francisco García-Sánchez Luís Álvarez Sabucedo Rodrigo Martínez-Béjar Luís Anido Rifón Rafael Valencia-García Juan Miguel Gómez 《Expert Systems》2011,28(5):416-436
The increasing volume of eGovernment-related services is demanding new approaches for service integration and interoperability in this domain. Semantic web (SW) technologies and applications can leverage the potential of eGovernment service integration and discovery, thus tackling the problems of semantic heterogeneity characterizing eGovernment information sources and the different levels of interoperability. eGovernment services will therefore be semantically described in the foreseeable future. In an environment with semantically annotated services, software agents are essential as the entities responsible for exploiting the semantic content in order to automate some tasks and so enhance the user's experience. In this paper, we present a framework that provides a seamless integration of semantic web services and intelligent agent technologies, making use of ontologies to facilitate their interoperation. The proposed framework can assist in the development of powerful and flexible distributed systems in complex, dynamic, heterogeneous, unpredictable and open environments. Our approach is backed up by a proof-of-concept implementation, where the breakthrough of integrating disparate eGovernment services has been tested.
75.
76.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them and illustrate its use with three examples.
Program summary
Title of program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming languages used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e− annihilation in one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restrictions on the complexity of the problem: limited, for the moment, to 2→2 particle reactions in the electroweak standard model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.
77.
Butakoff C Frangi AF 《IEEE transactions on pattern analysis and machine intelligence》2006,28(11):1847-1857
This paper presents a framework for weighted fusion of several active shape and active appearance models. The approach is based on the eigenspace fusion method proposed by Hall et al., which has been extended to fuse more than two weighted eigenspaces using unbiased mean and covariance matrix estimates. To evaluate the performance of fusion, a comparative assessment of segmentation precision as well as facial verification tests are performed using the AR, EQUINOX, and XM2VTS databases. Based on the results, it is concluded that fusion is useful when the model needs to be updated online or when the original observations are absent.
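The core of eigenspace fusion is merging the means and covariances of two observation sets without revisiting the original samples, then re-deriving the eigenspace. A generic unweighted two-set sketch (not the authors' weighted multi-model formulation, which also handles unbiased estimates and more than two models):

```python
import numpy as np

# Merge the means and (population) covariances of two sets of sizes
# n1 and n2. The cross term accounts for the separation of the two
# means; the result equals the statistics of the pooled data exactly.

def merge_stats(n1, mu1, cov1, n2, mu2, cov2):
    n = n1 + n2
    mu = (n1 * mu1 + n2 * mu2) / n
    diff = (mu1 - mu2).reshape(-1, 1)
    cov = (n1 * cov1 + n2 * cov2) / n + (n1 * n2 / n**2) * (diff @ diff.T)
    return n, mu, cov

def fused_eigenspace(n1, mu1, cov1, n2, mu2, cov2):
    """Eigendecompose the merged covariance, strongest modes first."""
    _, mu, cov = merge_stats(n1, mu1, cov1, n2, mu2, cov2)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
    return mu, eigvals[::-1], eigvecs[:, ::-1]   # descending order
```

Because the merge is exact for population covariances, fusing stored model statistics is equivalent to rebuilding the eigenspace from the concatenated observations, which is precisely why fusion helps when the original observations are absent.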
78.
Alejandro Echeverría Matías Améstica Francisca Gil Miguel Nussbaum Enrique Barrios Sandra Leclerc 《Computers in human behavior》2012
Computer Supported Collaborative Learning is a pedagogical approach that can be used for deploying educational games in the classroom. However, there is no clear understanding as to which technological platforms are better suited for deploying co-located collaborative games, nor of the general affordances that are required. In this work we explore two different technological platforms for developing collaborative games in the classroom: one based on augmented reality technology and the other based on multiple-mice technology. In both cases, the same game was introduced to teach electrostatics, and the results were compared experimentally in a real class.
79.
Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract relevant facts about the question from articles across KBs, which are then projected onto the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever the KBs provide only narrow coverage. This work describes a new approach to this problem: constructing context models for scoring candidate answers, which are, more precisely, statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are created by capturing the semantics of candidate answers (e.g., “novel,” “singer,” “coach,” and “city”). This work is extended by investigating the impact on context models of extra linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of n-gram lexicalized dependency paths as context models, and identified promising context indicators for the presence of definitions in natural language texts.
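The scoring idea can be sketched with a plain bigram language model: sequences resembling the training contexts receive higher log-probability than scrambled ones. The paper trains its models over lexicalized dependency paths; the word-level bigrams, toy corpus, and add-alpha smoothing below are simplified stand-ins.

```python
from collections import defaultdict
import math

# A bigram language model with add-alpha smoothing. Training counts
# word pairs (with <s>/</s> sentence markers); log_prob sums smoothed
# bigram log-probabilities, so definition-like word orders score higher.

class BigramModel:
    def __init__(self, sentences, alpha=1.0):
        self.alpha = alpha
        self.unigrams = defaultdict(int)
        self.bigrams = defaultdict(int)
        self.vocab = set()
        for sent in sentences:
            tokens = ["<s>"] + sent.lower().split() + ["</s>"]
            self.vocab.update(tokens)
            for prev, cur in zip(tokens, tokens[1:]):
                self.unigrams[prev] += 1
                self.bigrams[(prev, cur)] += 1

    def log_prob(self, sentence):
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        V = len(self.vocab)
        total = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            num = self.bigrams[(prev, cur)] + self.alpha
            den = self.unigrams[prev] + self.alpha * V
            total += math.log(num / den)
        return total

corpus = [
    "a novel is a long work of fiction",
    "a city is a large human settlement",
]
model = BigramModel(corpus)
```

A candidate answer whose surrounding context scores well under such a model is more likely to appear in a genuine definition, even when the KB lacks a directly matching article.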
80.