Full-text availability
Paid full text | 1694 articles |
Free | 118 articles |
Free (domestic) | 1 article |
Subject category
Electrical engineering | 24 articles |
General | 2 articles |
Chemical industry | 453 articles |
Metalworking | 35 articles |
Machinery and instruments | 20 articles |
Building science | 76 articles |
Mining engineering | 6 articles |
Energy and power | 51 articles |
Light industry | 225 articles |
Hydraulic engineering | 12 articles |
Oil and natural gas | 2 articles |
Radio and electronics | 110 articles |
General industrial technology | 291 articles |
Metallurgy | 153 articles |
Nuclear technology | 6 articles |
Automation | 347 articles |
Publication year
2024 | 3 articles |
2023 | 17 articles |
2022 | 16 articles |
2021 | 62 articles |
2020 | 36 articles |
2019 | 59 articles |
2018 | 64 articles |
2017 | 63 articles |
2016 | 66 articles |
2015 | 56 articles |
2014 | 90 articles |
2013 | 118 articles |
2012 | 129 articles |
2011 | 113 articles |
2010 | 107 articles |
2009 | 103 articles |
2008 | 97 articles |
2007 | 88 articles |
2006 | 66 articles |
2005 | 55 articles |
2004 | 49 articles |
2003 | 42 articles |
2002 | 51 articles |
2001 | 25 articles |
2000 | 34 articles |
1999 | 16 articles |
1998 | 28 articles |
1997 | 24 articles |
1996 | 18 articles |
1995 | 18 articles |
1994 | 8 articles |
1993 | 14 articles |
1992 | 13 articles |
1991 | 12 articles |
1990 | 3 articles |
1989 | 5 articles |
1988 | 2 articles |
1987 | 4 articles |
1986 | 2 articles |
1985 | 5 articles |
1984 | 3 articles |
1983 | 4 articles |
1981 | 4 articles |
1980 | 2 articles |
1979 | 4 articles |
1976 | 5 articles |
1975 | 2 articles |
1974 | 3 articles |
1970 | 1 article |
1968 | 1 article |
Sort by: 1813 results found; search took 328 ms
41.
42.
Esteban J. Palomo, Enrique Domínguez, Rafael M. Luque-Baena, José Muñoz 《Neural Processing Letters》2013,37(1):69-87
Image compression based on color quantization and image segmentation are two typical tasks in the field of image processing. Several techniques based on splitting algorithms or cluster analysis have been proposed in the literature. Self-organizing maps have also been applied to these problems, although with some limitations due to their fixed network architecture and their lack of representation of hierarchical relations among data. In this paper, both problems are addressed using growing hierarchical self-organizing models. An advantage of these models is their hierarchical architecture, which is more flexible in adapting to the input data and reflects the inherent hierarchical relations among the data. Comparative results are provided for image compression and image segmentation. Experimental results show that the proposed approach is promising for image processing and demonstrate the power of the hierarchical information provided by the proposed model.
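The color-quantization idea in this abstract can be illustrated with a toy example. The sketch below is a minimal flat one-dimensional self-organizing map, not the authors' growing hierarchical model; all parameter choices (learning-rate decay, neighbourhood schedule) are illustrative assumptions.

```python
import numpy as np

def som_quantize(pixels, n_units=4, epochs=30, lr=0.5, seed=0):
    """Quantize RGB pixels with a tiny 1-D self-organizing map.

    pixels: (N, 3) float array in [0, 1].  Returns the learned palette
    (one RGB row per map unit) and the palette index of each pixel.
    """
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, 3))            # initial random palette
    for epoch in range(epochs):
        frac = epoch / epochs
        eta = lr * (1.0 - frac)                   # decaying learning rate
        sigma = max(0.1, 1.0 - frac)              # shrinking neighbourhood
        for p in rng.permutation(len(pixels)):
            x = pixels[p]
            # best-matching unit: closest palette entry to this pixel
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            for u in range(n_units):
                # Gaussian neighbourhood pulls nearby units toward x too
                h = np.exp(-((u - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[u] += eta * h * (x - weights[u])
    labels = np.argmin(
        np.linalg.norm(pixels[:, None, :] - weights[None], axis=2), axis=1)
    return weights, labels
```

Compressing an image then amounts to storing the palette plus one small index per pixel instead of three color channels.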
43.
Enrique Alfonseca, Guillermo Garrido, Jean-Yves Delort, Anselmo Peñas 《Language Resources and Evaluation》2013,47(4):1163-1190
This paper describes the generation of temporally anchored infobox attribute data from the Wikipedia history of revisions. By mining (attribute, value) pairs from the revision history of the English Wikipedia, we collect a comprehensive knowledge base that contains data on how attributes change over time. When dealing with the Wikipedia edit history, vandalism and erroneous edits are a concern for data quality. We present a study of vandalism identification in Wikipedia edits that uses only features from the infoboxes, and show that on this dataset we can obtain an accuracy comparable to a state-of-the-art vandalism identification method based on the whole article. Finally, we discuss different characteristics of the extracted dataset, which we make available for further study.
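The core extraction step, mining (attribute, value) pairs from infobox wikitext and anchoring them to revision timestamps, can be sketched roughly as follows. This is a hypothetical simplification, not the authors' pipeline: real infobox markup has nested templates and multi-line values that a single regular expression does not handle.

```python
import re

def infobox_pairs(wikitext):
    """Extract (attribute, value) pairs from '| name = value' infobox lines."""
    pairs = []
    for line in wikitext.splitlines():
        m = re.match(r"\s*\|\s*([\w-]+)\s*=\s*(.+?)\s*$", line)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs

def attribute_timeline(revisions):
    """revisions: list of (timestamp, wikitext) in chronological order.
    Returns {attribute: [(timestamp, value), ...]}, recording a new
    entry only when the value actually changes between revisions."""
    timeline = {}
    for ts, text in revisions:
        for attr, value in infobox_pairs(text):
            history = timeline.setdefault(attr, [])
            if not history or history[-1][1] != value:
                history.append((ts, value))
    return timeline
```

The resulting per-attribute histories are exactly the kind of temporally anchored data the paper describes, and the "only infobox features" vandalism study would operate on these diffs rather than on full article text.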
44.
Functional networks are used to solve some nonlinear regression problems. One particular problem is how to find the optimal transformations of the response and/or the explanatory variables and obtain the best possible functional relation between the response and predictor variables. After a brief introduction to functional networks, two specific transformation models based on functional networks are proposed. Unlike in neural networks, where the selection of the network topology is arbitrary, the selection of the initial topology of a functional network is problem driven. This important feature of functional networks is illustrated for each of the two proposed models. An equivalent but simpler network may be obtained from the initial topology using functional equations. The resulting model is then checked for uniqueness of representation. When the functions specified by the transformations are unknown in form, families of linearly independent functions are used as approximations. Two different parametric criteria are used for learning these functions: constrained least squares and maximum canonical correlation. Model selection criteria are used to avoid the problem of overfitting. Finally, the performance of the proposed method is assessed and compared to that of other methods using a simulation study as well as several real-life datasets.
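The "families of linearly independent functions" device mentioned above can be made concrete with a small sketch: approximate an unknown transformation as a linear combination of basis functions and fit the coefficients by least squares. This is an unconstrained illustration, not the paper's constrained formulation or its canonical-correlation criterion.

```python
import numpy as np

def fit_basis(x, y, basis):
    """Least-squares fit y ≈ sum_k c_k * phi_k(x) for a family of
    linearly independent basis functions phi_k."""
    A = np.column_stack([phi(x) for phi in basis])  # design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Example family: the first three monomials (linearly independent).
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]
```

Linear independence of the family guarantees the design matrix has full column rank on generic data, which is what makes the fitted representation unique.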
45.
46.
Miguel A. Palacios-Alonso, Carlos A. Brizuela, L. Enrique Sucar 《Journal of Automated Reasoning》2010,45(1):21-37
Many problems, such as voice recognition and speech recognition among other tasks, have been tackled with Hidden Markov Models (HMMs). These problems can also be addressed with an extension of the Naive Bayesian Classifier (NBC) known as the Dynamic NBC (DNBC). From a dynamic Bayesian network (DBN) perspective, a DNBC contains an NBC at each time step. NBCs work well on data sets with independent attributes, but they perform poorly when the attributes are dependent or when one or more irrelevant attributes depend on relevant ones. Therefore, to increase the classifier's accuracy, we need a method for designing network structures that capture the dependencies and discard irrelevant attributes. Furthermore, when dealing with dynamical processes there are temporal relations that should be considered in the network design. In order to learn these models automatically from data and increase the classifier's accuracy, we propose an evolutionary optimization algorithm to solve this design problem. We introduce a new encoding scheme and new genetic operators that are natural extensions of previously proposed encodings and operators for grouping problems. The design methodology is applied to the recognition of nine hand gestures. Experimental results show that the evolved network achieves higher average classification accuracy than the basic DNBC and an HMM.
47.
In several domains it is common to have data from different but closely related problems. For instance, in manufacturing, many products follow the same industrial process but under different conditions; or in industrial diagnosis, where there is equipment with similar specifications. In these cases it is common to have plenty of data for some scenarios but very little for others. In order to learn accurate models for rare cases, it is desirable to use data and knowledge from similar cases, a technique known as transfer learning. In this paper we propose an inductive transfer learning method for Bayesian networks that considers both structure and parameter learning. For structure learning we use conditional independence tests, combining measures from the target task with those obtained from one or more auxiliary tasks via a novel weighted sum of the conditional independence measures. For parameter learning, we propose two variants of the linear pool for probability aggregation, combining the probability estimates from the target task with those from the auxiliary tasks. To validate our approach, we used three Bayesian network models that are commonly used for evaluating learning techniques, and generated variants of each model by changing the structure as well as the parameters. We then learned one of the variants with a small dataset and combined it with information from the other variants. The experimental results show a significant improvement in terms of structure and parameters when we transfer knowledge from similar tasks. We also evaluated the method with real-world data from a manufacturing process covering several products, obtaining an improvement in the log-likelihood between the data and the model when we transfer knowledge from related products.
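The linear pool mentioned for parameter learning is a simple idea and can be sketched in a few lines. The weighting scheme below (fixed target weight, remainder split evenly among auxiliaries) is an illustrative assumption; the paper proposes its own two variants.

```python
def linear_pool(target, auxiliaries, target_weight=0.6):
    """Weighted linear opinion pool: combine a probability estimate from
    the target task with estimates of the same probability from
    auxiliary tasks.  The remaining weight is split evenly among the
    auxiliaries; with no auxiliaries, the target estimate is returned."""
    if not auxiliaries:
        return target
    aux_weight = (1.0 - target_weight) / len(auxiliaries)
    return target_weight * target + aux_weight * sum(auxiliaries)
```

Because the weights sum to one, pooling valid probabilities always yields a valid probability, so each conditional-probability-table entry of the target network can be pooled independently.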
48.
Antonio Fernández-Caballero, María T. López, Enrique J. Carmona, Ana E. Delgado 《Neurocomputing》2011,74(8):1175-1181
Certainly, one of the prominent ideas of Professor José Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research has been that any bottom-up organization may be made operational using two biologically inspired methods called “algorithmic lateral inhibition”, a generalization of lateral inhibition anatomical circuits, and “accumulative computation”, a working memory related to the temporal evolution of the membrane potential. This paper is dedicated to the computational formulation of both methods. Finally, all of our group's work related to this methodological approach is mentioned and summarized, showing that all of it supports the validity of the approach.
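Classical lateral inhibition, the circuit that “algorithmic lateral inhibition” generalizes, can be shown in a toy form: each unit's activity is suppressed by its neighbours' activity, which sharpens peaks and edges. The neighbourhood width and inhibition strength below are illustrative assumptions, not the paper's formulation.

```python
def lateral_inhibition(activity, strength=0.5):
    """One pass of lateral inhibition over a 1-D activity vector: each
    unit is reduced by a fraction of its immediate neighbours' activity,
    and negative results are clipped to zero (rectification)."""
    n = len(activity)
    out = []
    for i in range(n):
        left = activity[i - 1] if i > 0 else 0.0
        right = activity[i + 1] if i < n - 1 else 0.0
        out.append(max(0.0, activity[i] - strength * (left + right)))
    return out
```

An isolated active unit keeps its full activity while units inside a uniformly active block suppress each other, which is why the mechanism enhances contrast at boundaries.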
49.
Pablo Sendín‐Raña, Francisco J. González‐Castaño, Enrique Pérez‐Barros, Pedro S. Rodríguez‐Hernández, Felipe Gil‐Castiñeira, José M. Pousada‐Carballo 《Software》2009,39(3):279-298
For a long time, the design of relational databases has focused on the optimization of atomic transactions (insert, select, update or delete). Currently, relational databases store the tactical information of data warehouses, mainly for select‐like operations. However, the database paradigm has evolved, and nowadays on‐line analytical processing (OLAP) systems handle strategic information for further analysis. These systems enable fast, interactive and consistent analysis of data warehouses, including shared calculations and allocations. Together, OLAP and data warehouses allow multidimensional data views, turning raw data into knowledge. OLAP allows ‘slice and dice’ navigation and a top‐down perspective of data hierarchies. In this paper, we describe our experience in migrating from a large relational database management system to an OLAP system on top of a relational layer (the data warehouse), and the resulting contributions to open‐source ROLAP optimization. Existing open‐source ROLAP technologies rely on summarized tables with materialized aggregate views to improve system performance (in terms of response time), but the design and maintenance of those tables are cumbersome. Instead, we intensively exploit cache memory, where key data reside, yielding low response times. A cold-start process brings summarized data from the relational database into cache memory, subsequently reducing the response time. We ensure concurrent access to the summarized data, as well as consistency when the relational database updates data. We also improve the OLAP functionality by providing new features for automating the creation of calculated members. This makes it possible to define new measures on the fly using virtual dimensions, without re‐designing the multidimensional cube. We have chosen the de facto standard XML/A for service provision. Copyright © 2008 John Wiley & Sons, Ltd.
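The cold-start/cache/invalidation cycle described above can be sketched in miniature. This is a hypothetical toy, not the authors' ROLAP engine: `AggregateCache`, its in-memory fact list, and the sum-only rollup are all illustrative assumptions.

```python
class AggregateCache:
    """Keeps summarized (rolled-up) measures in memory: a cold start
    pre-computes them once, and updates to the underlying fact rows
    invalidate only the affected cache keys (consistency on update)."""

    def __init__(self, facts):
        self.facts = facts          # list of (dimension_value, measure) rows
        self.cache = {}

    def cold_start(self):
        """Warm the cache with all summaries before queries arrive."""
        for dim, measure in self.facts:
            self.cache[dim] = self.cache.get(dim, 0) + measure

    def rollup(self, dim):
        """Serve a summary from cache; aggregate on demand on a miss."""
        if dim not in self.cache:
            self.cache[dim] = sum(m for d, m in self.facts if d == dim)
        return self.cache[dim]

    def update(self, dim, measure):
        """New fact row: append it and drop the now-stale summary."""
        self.facts.append((dim, measure))
        self.cache.pop(dim, None)
```

After the cold start every rollup is a dictionary lookup, which is the mechanism behind the low response times the paper reports.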
50.
José Ramón González de Mendívil, José Enrique Armendáriz-Iñigo, José Ramón Garitagoitia, Francesc D. Muñoz-Escoí 《The Journal of supercomputing》2009,50(2):121-161
This paper provides a formal specification and proof of correctness of a basic Generalized Snapshot Isolation certification-based data replication protocol for database middleware architectures. The protocol, as well as the main system components, has been modeled using a state transition system, allowing a perfect match with the usual deployment in a middleware system. The proof encompasses both safety and liveness properties, as is commonly done for a distributed algorithm. Furthermore, a crash failure model has been assumed for the correctness proof, although recovery analysis is not the aim of this paper; this allows an easy extension toward a crash-recovery model in future work. The liveness proof focuses on the uniform commit property: if a site has committed a transaction, every other site will either commit it or have crashed.