Full-text access type
Paid full text | 5768 articles |
Free | 447 articles |
Free (domestic) | 53 articles |
Subject classification
Electrical engineering | 71 articles |
General | 68 articles |
Chemical industry | 1550 articles |
Metalworking | 89 articles |
Machinery and instruments | 152 articles |
Building science | 219 articles |
Mining engineering | 38 articles |
Energy and power | 196 articles |
Light industry | 1122 articles |
Hydraulic engineering | 61 articles |
Oil and natural gas | 58 articles |
Weapons industry | 2 articles |
Radio engineering | 410 articles |
General industrial technology | 822 articles |
Metallurgical industry | 272 articles |
Nuclear technology | 18 articles |
Automation technology | 1120 articles |
Publication year
2024 | 16 articles |
2023 | 76 articles |
2022 | 207 articles |
2021 | 289 articles |
2020 | 172 articles |
2019 | 220 articles |
2018 | 239 articles |
2017 | 221 articles |
2016 | 244 articles |
2015 | 191 articles |
2014 | 281 articles |
2013 | 456 articles |
2012 | 394 articles |
2011 | 470 articles |
2010 | 332 articles |
2009 | 306 articles |
2008 | 299 articles |
2007 | 288 articles |
2006 | 195 articles |
2005 | 192 articles |
2004 | 147 articles |
2003 | 138 articles |
2002 | 122 articles |
2001 | 71 articles |
2000 | 71 articles |
1999 | 73 articles |
1998 | 128 articles |
1997 | 87 articles |
1996 | 55 articles |
1995 | 42 articles |
1994 | 39 articles |
1993 | 28 articles |
1992 | 28 articles |
1991 | 21 articles |
1990 | 22 articles |
1989 | 19 articles |
1988 | 8 articles |
1987 | 4 articles |
1986 | 3 articles |
1985 | 11 articles |
1984 | 7 articles |
1983 | 6 articles |
1982 | 7 articles |
1981 | 5 articles |
1980 | 4 articles |
1979 | 3 articles |
1978 | 6 articles |
1977 | 11 articles |
1976 | 5 articles |
1975 | 4 articles |
Sort order: 6268 results found (search time: 13 ms)
141.
The localization of the components of an object near a device, before the real interaction is obtained, is usually determined by measuring the proximity of the object's features to the device. To do this efficiently, hierarchical decompositions are used, so that the features of the objects are classified into several types of cells, usually rectangular. In this paper we propose a solution based on classifying a set of points situated on the device into a little-known spatial decomposition named a tetra-tree. Using this type of spatial decomposition gives us several quantitative and qualitative properties that allow a more realistic and intuitive visual interaction, as well as the possibility of selecting inaccessible components. These features could be used in virtual sculpting or accessibility tasks. To demonstrate these properties, we have compared an interaction system based on tetra-trees to one based on octrees.
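The abstract does not give the tetra-tree construction, but the octree baseline it compares against is a standard technique that can be sketched briefly. The following is a hypothetical minimal illustration (not the paper's method; all names are invented): points are recursively classified into eight rectangular cells, and a proximity query prunes any cell whose bounds lie farther than the query radius.

```python
def build_octree(points, lo, hi, leaf_size=4, depth=8):
    """Recursively classify 3D points into rectangular cells."""
    if len(points) <= leaf_size or depth == 0:
        return {"lo": lo, "hi": hi, "points": points, "children": None}
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    # bucket index: bit d is set when the point lies above the midplane in dim d
    buckets = [[] for _ in range(8)]
    for p in points:
        buckets[sum((p[d] >= mid[d]) << d for d in range(3))].append(p)
    children = []
    for i, bucket in enumerate(buckets):
        clo = tuple(mid[d] if (i >> d) & 1 else lo[d] for d in range(3))
        chi = tuple(hi[d] if (i >> d) & 1 else mid[d] for d in range(3))
        children.append(build_octree(bucket, clo, chi, leaf_size, depth - 1))
    return {"lo": lo, "hi": hi, "points": points, "children": children}

def near(node, q, r):
    """Points within distance r of q, pruning whole cells by their bounds."""
    if any(q[d] < node["lo"][d] - r or q[d] > node["hi"][d] + r for d in range(3)):
        return []
    if node["children"] is None:
        return [p for p in node["points"]
                if sum((p[d] - q[d]) ** 2 for d in range(3)) <= r * r]
    return [p for c in node["children"] for p in near(c, q, r)]
```

The pruning test is what makes the hierarchy pay off: distant cells are rejected in one comparison instead of testing each feature point individually.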
142.
Artur J. Lemonte Francisco Cribari-Neto 《Computational statistics & data analysis》2010,54(5):1307-718
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio test tends to be liberal when the sample size is small. We obtain a correction factor which reduces the size distortion of the test. Also, we consider a parametric bootstrap scheme to obtain improved critical values and improved p-values for the likelihood ratio test. The numerical results show that the modified tests are more reliable in finite samples than the usual likelihood ratio test. We also present an empirical application.
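The parametric bootstrap scheme mentioned here is generic and worth illustrating. The sketch below is an assumption for illustration only: it uses a plain normal-mean model (H0: mu = 0) rather than the Birnbaum-Saunders regression of the paper. Artificial samples are drawn from the fitted null model, the likelihood ratio statistic is recomputed on each, and the bootstrap p-value is the proportion of resampled statistics at least as large as the observed one.

```python
import numpy as np

def lr_stat(x):
    # likelihood ratio statistic for H0: mu = 0 in a normal model;
    # s0 and s1 are the variance MLEs under the null and the alternative
    s1 = np.mean((x - x.mean()) ** 2)
    s0 = np.mean(x ** 2)
    return len(x) * np.log(s0 / s1)

def bootstrap_pvalue(x, n_boot=999, seed=0):
    # parametric bootstrap: simulate from the fitted null model and
    # recompute the statistic on each artificial sample
    rng = np.random.default_rng(seed)
    obs = lr_stat(x)
    sigma0 = np.sqrt(np.mean(x ** 2))          # null-model fit (mu = 0)
    stats = [lr_stat(rng.normal(0.0, sigma0, size=len(x)))
             for _ in range(n_boot)]
    # +1 corrections keep the p-value away from exactly zero
    return (1 + sum(s >= obs for s in stats)) / (n_boot + 1)
```

Replacing the chi-squared reference distribution with this simulated one is what reduces the small-sample size distortion the abstract describes.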
143.
Eufrásio de A. Lima Neto Francisco de A.T. de Carvalho 《Computational statistics & data analysis》2010,54(2):333-347
This paper introduces an approach to fitting a constrained linear regression model to interval-valued data. Each example of the learning set is described by a feature vector in which each feature value is an interval. The new approach fits a constrained linear regression model to the midpoints and ranges of the interval values assumed by the variables in the learning set. The lower and upper boundaries of the interval value of the dependent variable are predicted from its midpoint and range, which are estimated from the fitted linear regression models applied to the midpoint and range of each interval value of the independent variables. This new method shows the importance of range information for prediction performance, as well as the use of inequality constraints to ensure mathematical coherence between the predicted values of the lower and upper boundaries of the interval. The authors also propose an expression for a goodness-of-fit measure termed the determination coefficient. The assessment of the proposed prediction method is based on estimating the average behavior of the root-mean-square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment with different data set configurations. Among other aspects, the synthetic data sets take into account the dependence, or lack thereof, between the midpoint and range of the intervals. The bias produced by the use of inequality constraints on the vector of parameters is also examined in terms of the mean-square error of the parameter estimates. Finally, the approaches proposed in this paper are applied to a real data set and their performances are compared.
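The midpoint-and-range idea is concrete enough to sketch. The toy version below is a hypothetical simplification (one interval-valued predictor, plain least squares, and a nonnegativity clamp on the predicted range standing in for the paper's inequality constraints): one line is fit to the midpoints, another to the ranges, and the predicted interval is reassembled from the two.

```python
def ols(xs, ys):
    # simple least-squares fit y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def fit_interval_model(X, Y):
    # X, Y: lists of (lower, upper) intervals; fit midpoint and range models
    xm = [(l + u) / 2 for l, u in X]; xr = [u - l for l, u in X]
    ym = [(l + u) / 2 for l, u in Y]; yr = [u - l for l, u in Y]
    return ols(xm, ym), ols(xr, yr)

def predict_interval(model, x):
    (am, bm), (ar, br) = model
    l, u = x
    mid = am + bm * (l + u) / 2
    rng = max(0.0, ar + br * (u - l))   # clamp keeps lower <= upper
    return mid - rng / 2, mid + rng / 2
```

The clamp plays the role of the coherence constraint: whatever the fitted range model produces, the predicted lower boundary can never exceed the upper one.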
144.
Francisco J. Pino Oscar Pedreira 《Journal of Systems and Software》2010,83(10):1662-1677
In software process improvement (SPI), few small organizations use models that guide the management and deployment of their improvement initiatives. This is largely because many of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation for small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in implementing the improvement opportunities. In this paper we propose a lightweight process which takes into account appropriate strategies for this type of organization. Our proposal, known as a "Lightweight process to incorporate improvements", uses the philosophy of the Scrum agile method, aiming to give detailed guidelines for supporting the management and execution of improvement opportunities within processes and putting them into practice in small companies. We have applied the proposed process in two small companies by means of the case study research method, and from the initial results we have observed that it is indeed suitable for small businesses.
145.
Virginia Francisco Pablo Gervás Federico Peinado 《Knowledge and Information Systems》2010,25(3):421-443
With the advent of affective computing, the task of adequately identifying, representing and processing the emotional connotations of text has acquired importance. Two problems facing this task are addressed in this paper: the composition of sentence emotion from word emotion, and a representation of emotion that allows easy conversion between existing computational representations. The emotion of a sentence of text should be derived by composition of the emotions of the words in the sentence, but no method has been proposed so far to model this compositionality. Of the various existing approaches for representing emotions, some are better suited for some problems and some for others, but there is no easy way of converting from one to another. This paper presents a system that addresses these two problems by reasoning with two ontologies implemented with Semantic Web technologies: one designed to represent word dependency relations within a sentence, and one designed to represent emotions. The ontology of word dependency relies on roles to represent the way emotional contributions project over word dependencies. By applying automated classification of mark-up results in terms of the emotion ontology, the system can interpret unrestricted input in terms of a restricted set of concepts for which particular rules are provided. The rules applied at the end of the process provide configuration parameters for a system for emotional voice synthesis.
146.
Francisco J. Pino César Pardo Félix García Mario Piattini 《Information and Software Technology》2010,52(10):1044-1061
Context: Diagnosing processes in a small company requires process assessment practices which give qualitative and quantitative results; these should offer an overall view of the process capability. The purpose is to obtain relevant information about the running of processes, for use in their control and improvement. However, small organizations have some problems in running process assessment, due to their specific characteristics and limitations.
Objective: This paper presents a methodology for assessing software processes which assists the activity of software process diagnosis in small organizations. It attempts to address issues such as the fact that: (i) process assessment is expensive and typically requires major company resources and (ii) many light assessment methods do not provide information that is detailed enough for diagnosing and improving processes.
Method: To achieve all this, the METvalCOMPETISOFT assessment methodology was developed. This methodology: (i) incorporates the strategy of internal assessments known as rapid assessment, meaning that these assessments do not take up too much time or use an excessive quantity of resources, nor are they too rigorous, and (ii) meets all the requirements described in the literature for an assessment proposal which is customized to the typical features of small companies.
Results: This paper also describes the experience of applying this methodology in eight small software organizations that took part in the COMPETISOFT project. The results obtained show that this approach allows us to obtain reliable information about the strengths and weaknesses of software processes, along with information for companies on opportunities for improvement.
Conclusion: The assessment methodology proposed sets out the elements needed to assist with diagnosing the process in small organizations step by step, while seeking to make its application economically feasible in terms of resources and time. From the initial application it may be seen that this assessment methodology can be useful, practical and suitable for diagnosing processes in this type of organization.
147.
Carlos Valle Francisco Saravia Héctor Allende Raúl Monge César Fernández 《Neural Processing Letters》2010,32(3):277-291
Ensemble learning has gained considerable attention in different tasks, including regression, classification and clustering. Adaboost and Bagging are two popular approaches used to train these models. The former provides accurate estimations in regression settings but is computationally expensive because of its inherently sequential structure, while the latter is less accurate but highly efficient. One of the drawbacks of ensemble algorithms is the high computational cost of the training stage. To address this issue, we propose a parallel implementation of the Resampling Local Negative Correlation (RLNC) algorithm for training a neural network ensemble, in order to achieve accuracy competitive with Adaboost and efficiency comparable to that of Bagging. We test our approach on both synthetic and real datasets from the UCI and StatLib repositories for the regression task. In particular, our fine-grained parallel approach allows us to achieve a satisfactory balance between accuracy and parallel efficiency.
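The RLNC algorithm itself is beyond a short snippet, but the Bagging baseline it is measured against is easy to sketch, and it shows why Bagging parallelizes so well. Below is a hypothetical minimal regression version (a least-squares line as the base learner, an invented choice): each member is fit on an independent bootstrap resample, so the members could be trained concurrently, and predictions are averaged.

```python
import random

def fit_line(data):
    # least-squares line y = a + b*x over (x, y) pairs
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data)
    b = sum((x - mx) * (y - my) for x, y in data) / sxx if sxx else 0.0
    return my - b * mx, b

def bagging_fit(data, n_models=25, seed=0):
    # each member sees an independent bootstrap resample of the data;
    # the fits share nothing, which is what makes training parallelizable
    rng = random.Random(seed)
    return [fit_line([rng.choice(data) for _ in data]) for _ in range(n_models)]

def bagging_predict(models, x):
    # aggregate by averaging the member predictions
    return sum(a + b * x for a, b in models) / len(models)
```

Negative-correlation methods such as RLNC add a penalty coupling the members' errors during training, which is precisely the coupling a parallel implementation has to manage; plain Bagging has none.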
148.
Simulation of complex mechatronic systems like an automobile, involving mechanical components as well as actuators and active electronic control devices, can be accomplished by combining tools that deal with the simulation of the different subsystems. In this sense, it is often desirable to couple a multibody simulation software (for the mechanical simulation) with external numerical computing environments and block diagram simulators (for the modeling and simulation of nonmechanical components).
149.
In this paper we present adaptive algorithms for solving the uniform continuous piecewise affine approximation (UCPA) problem in the case of Lipschitz or convex functions. The algorithms are based on the tree approximation (adaptive splitting) procedure. Uniform convergence is achieved by means of global optimization techniques for obtaining tight upper bounds on the local error estimate (splitting criterion). We give numerical results for the distance function to 2D and 3D geometric bodies. The resulting trees can retrieve the values of the target function in a fast way.
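The adaptive splitting loop is simple to illustrate in one dimension. The sketch below is a hypothetical simplification: the local error is estimated by dense sampling rather than by the global optimization bounds the paper relies on. Each interval is approximated by the chord of f; any interval whose estimated error exceeds the tolerance is bisected, and the accepted pieces form the tree's leaves.

```python
def adaptive_affine_approx(f, a, b, tol, samples=17):
    # Approximate f on [a, b] by chords; split any interval whose sampled
    # error estimate exceeds tol (the splitting criterion).
    pieces, stack = [], [(a, b)]
    while stack:
        lo, hi = stack.pop()
        fl = f(lo)
        slope = (f(hi) - fl) / (hi - lo)
        # sampled stand-in for a rigorous local error bound
        err = max(abs(f(lo + (hi - lo) * i / (samples - 1))
                      - (fl + slope * (hi - lo) * i / (samples - 1)))
                  for i in range(samples))
        if err <= tol:
            pieces.append((lo, hi, slope, fl - slope * lo))  # y = slope*x + intercept
        else:
            mid = (lo + hi) / 2
            stack.extend([(lo, mid), (mid, hi)])
    return sorted(pieces)

def eval_pieces(pieces, x):
    # fast retrieval: locate the piece containing x and evaluate its affine map
    for lo, hi, m, c in pieces:
        if lo <= x <= hi:
            return m * x + c
    raise ValueError("x outside approximated domain")
```

A real implementation would replace the linear scan in `eval_pieces` with a descent of the splitting tree, which is what makes retrieval fast.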
150.
A new fast prototype selection method based on clustering (Total citations: 2; self-citations: 1; citations by others: 1)
J. Arturo Olvera-López J. Ariel Carrasco-Ochoa J. Francisco Martínez-Trinidad 《Pattern Analysis & Applications》2010,13(2):131-141
In supervised classification, a training set T is given to a classifier for classifying new prototypes. In practice, not all information in T is useful for classifiers; it is therefore convenient to discard irrelevant prototypes from T. This process is known as prototype selection, an important task for classifiers since through it the time for classification or training can be reduced. In this work, we propose a new fast prototype selection method for large datasets, based on clustering, which selects border prototypes and some interior prototypes. Experimental results showing the performance of our method and comparing accuracy and runtimes against other prototype selection methods are reported.
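The border-versus-interior idea behind clustering-based prototype selection can be sketched with a toy clustering step. The following is a hypothetical minimal version (plain k-means standing in for the method's actual clustering; all names invented): clusters containing more than one class are treated as border regions and kept in full, while single-class clusters contribute only the prototype closest to their centroid.

```python
import random
from math import dist  # Euclidean distance (Python 3.8+)

def kmeans(points, k, iters=25, seed=0):
    # tiny k-means; returns a cluster label per point and the centroids
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda i: dist(p, centers[i]))
                  for p in points]
        for i in range(k):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centers[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels, centers

def select_prototypes(X, y, k, seed=0):
    labels, centers = kmeans(X, k, seed=seed)
    keep = []
    for i in range(k):
        idx = [j for j, l in enumerate(labels) if l == i]
        if len({y[j] for j in idx}) > 1:
            keep.extend(idx)  # mixed-class cluster: border region, keep all
        elif idx:
            # homogeneous cluster: one interior representative suffices
            keep.append(min(idx, key=lambda j: dist(X[j], centers[i])))
    return sorted(keep)
```

Discarding interior prototypes this way shrinks T while keeping the points near class boundaries, which are the ones that actually shape the decision surface.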