11.
Comparison of different classifier algorithms for diagnosing macular and optic nerve diseases (total citations: 1; self-citations: 1; citations by others: 0)
Abstract: The aim of this research was to compare classifier algorithms, namely the C4.5 decision tree, the least squares support vector machine (LS-SVM), and the artificial immune recognition system (AIRS), for diagnosing macular and optic nerve diseases from pattern electroretinography signals. The signals were obtained with electrophysiological testing devices from 106 subjects with optic nerve or macular disease. To assess the test performance of the classifiers, classification accuracy, receiver operating characteristic curves, sensitivity and specificity values, confusion matrices, and 10-fold cross-validation were used. The classification accuracies obtained with 10-fold cross-validation are 85.9%, 100%, and 81.82% for the C4.5 decision tree, the LS-SVM, and the AIRS classifier, respectively. The results show that the LS-SVM is a robust and effective classifier for detecting macular and optic nerve diseases.
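The evaluation protocol described above, 10-fold cross-validation with per-fold accuracy, can be sketched as follows. The classifier here is a toy nearest-class-mean rule on synthetic one-dimensional data, standing in for the C4.5/LS-SVM/AIRS models of the paper:

```python
import statistics

def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

def cross_val_accuracy(fit, classify, X, y, k=10):
    """Mean accuracy of a fit/classify pair over k folds."""
    accs = []
    for train, test in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(classify(model, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return statistics.mean(accs)

# Toy nearest-class-mean classifier on synthetic 1-D "signal amplitude" data.
def fit(Xs, ys):
    means = {}
    for c in set(ys):
        vals = [x for x, yy in zip(Xs, ys) if yy == c]
        means[c] = sum(vals) / len(vals)
    return means

def classify(means, x):
    return min(means, key=lambda c: abs(x - means[c]))

X = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95] * 5   # two well-separated classes
y = [0, 0, 0, 1, 1, 1] * 5
acc = cross_val_accuracy(fit, classify, X, y, k=10)
```

On this cleanly separable toy data every fold is classified perfectly; real signals would of course yield accuracies like the ones reported in the abstract.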
12.
In this paper, we propose a new feature selection method called kernel F-score feature selection (KFFS), used as a pre-processing step in the classification of medical datasets. KFFS consists of two phases. In the first phase, the input features of the medical datasets are transformed to a kernel space by means of linear or radial basis function (RBF) kernel functions, mapping the datasets into a high-dimensional feature space. In the second phase, the F-score value of each feature in this high-dimensional space is calculated with the F-score formula, and the mean of these F-scores is computed. If a feature's F-score is greater than the mean, the feature is selected; otherwise it is removed from the feature space. In this way, KFFS removes irrelevant or redundant features from the high-dimensional input space. The purpose of the kernel functions is to transform a non-linearly separable medical dataset into a linearly separable feature space. To test the performance of KFFS, we used the heart disease dataset, the SPECT (single photon emission computed tomography) images dataset, and the Escherichia coli promoter gene sequence dataset from the UCI (University of California, Irvine) machine learning repository. As classification algorithms, the least squares support vector machine (LS-SVM) and a Levenberg–Marquardt artificial neural network were used. The results show that KFFS produces very promising results compared with F-score feature selection.
13.
Kemal Polat, Neural Computing & Applications, 2012, 21(8): 1987–1994
The forecasting of air pollution is important for the living environment and public health. The prediction of SO2 (sulfur dioxide), one of the indicators of air pollution, is a significant step toward decreasing air pollution. In this study, a novel feature scaling method called neighbor-based feature scaling (NBFS) is proposed and combined with artificial neural network (ANN) and adaptive network-based fuzzy inference system (ANFIS) prediction algorithms to predict SO2 concentration values from air quality measurements for Konya province in Turkey. The work consists of two stages. In the first stage, the SO2 concentration dataset is scaled using neighbor-based feature scaling. In the second stage, the ANN and ANFIS algorithms forecast the SO2 value from the scaled dataset. The SO2 concentration dataset was obtained from the Air Quality Statistics database of the Turkish Statistical Institute; to construct it, the mean values of the winter seasons were used, tracking air pollution changes between December 1, 2003 and December 30, 2005. To evaluate the performance of the proposed method, the mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and index of agreement (IA) were used. With NBFS applied to the SO2 concentration dataset, the obtained RMSE and IA values are 83.87 and 0.27 using ANN and 93 and 0.33 using ANFIS, respectively; without NBFS, they are 85.31 and 0.25 using ANN and 117.71 and 0.29 using ANFIS. These results demonstrate that the proposed feature scaling method yields very promising results in the prediction of SO2 concentration values.
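The four performance measures used above are standard and easy to state in code. A minimal sketch, with a small set of hypothetical observed/predicted SO2 values for illustration (IA here is Willmott's index of agreement):

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mse(obs, pred):
    """Mean square error."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(mse(obs, pred))

def index_of_agreement(obs, pred):
    """Willmott's index of agreement: 1 = perfect agreement, 0 = none."""
    mean_obs = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2
              for o, p in zip(obs, pred))
    return 1 - num / den

# Hypothetical SO2 concentrations (observed vs. model prediction):
obs = [100.0, 120.0, 80.0, 110.0]
pred = [105.0, 115.0, 85.0, 100.0]
```

Lower MAE/MSE/RMSE and an IA closer to 1 indicate better forecasts, which is how the with/without-NBFS results above are compared.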
14.
15.
Though there have been several recent efforts to develop disk-based video servers, these approaches have all ignored the topics of updates and disk server crashes. In this paper, we present a priority-based model for building video servers that handle two classes of events: user events, which may include enter, play, pause, rewind, fast-forward, and exit, as well as system events such as insert, delete, server-down, and server-up, which correspond to uploading new movie blocks onto the disk(s), eliminating existing blocks from the disk(s), and/or experiencing a disk server crash. We present algorithms to handle such events. Our algorithms are provably correct and computable in polynomial time. Furthermore, we guarantee that under certain reasonable conditions, continuing clients experience jitter-free presentations. We further justify the efficiency of our techniques with a prototype implementation and experimental results.
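The priority-based dispatch of the two event classes can be sketched with a heap-ordered queue. The priority levels below are hypothetical, chosen only to illustrate system events outranking user events; the paper's actual scheduling algorithms are not reproduced here:

```python
import heapq
import itertools

# Hypothetical priorities: lower number = served first. System events that
# affect data integrity outrank ordinary user playback events.
PRIORITY = {"server-down": 0, "server-up": 0, "delete": 1, "insert": 1,
            "play": 2, "pause": 2, "rewind": 2, "fast-forward": 2,
            "enter": 3, "exit": 3}

class EventQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a priority

    def post(self, event):
        heapq.heappush(self._heap, (PRIORITY[event], next(self._seq), event))

    def next_event(self):
        return heapq.heappop(self._heap)[2]

q = EventQueue()
for e in ["play", "enter", "server-down", "pause"]:
    q.post(e)
order = [q.next_event() for _ in range(4)]
```

A server-down event posted after several user events is still handled first, while events of equal priority keep their arrival order.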
16.
Kemal Egemen Ozden, Kurt Cornelis, Luc Van Eycken, Luc Van Gool, Computer Vision and Image Understanding, 2004, 96(3): 453
The 3D reconstruction of scenes containing independently moving objects from uncalibrated monocular sequences still poses serious challenges. Even if the background and the moving objects are rigid, each reconstruction is only known up to a certain scale, which results in a one-parameter family of possible relative trajectories per moving object with respect to the background. In order to determine a realistic solution from this family of possible trajectories, this paper proposes to exploit the increased linear coupling between camera and object translations that tends to appear at false scales. An independence criterion is formulated in the sense of true object and camera motions being minimally correlated. The increased coupling at false scales can also destroy special properties of the true object motion, such as planarity or periodicity. This provides a second, 'non-accidentalness' criterion for selecting the correct motion from the one-parameter family.
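The independence criterion can be illustrated in one dimension: for each candidate scale, form the object trajectory as camera translation plus the scaled relative trajectory, and pick the scale at which object and camera motion are least correlated. This is a simplified sketch of the idea with synthetic 1-D trajectories, not the paper's full formulation:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_scale(cam_t, rel_t, candidates):
    """Object position at scale s is cam_t + s * rel_t (1-D sketch).
    Return the candidate scale whose object motion steps are least
    correlated with the camera motion steps."""
    def coupling(s):
        obj = [c + s * r for c, r in zip(cam_t, rel_t)]
        steps_obj = [b - a for a, b in zip(obj, obj[1:])]
        steps_cam = [b - a for a, b in zip(cam_t, cam_t[1:])]
        return abs(pearson(steps_cam, steps_obj))
    return min(candidates, key=coupling)

# Synthetic data constructed so that at scale 2 the object motion steps
# are exactly uncorrelated with the camera motion steps.
cam_t = [0.0, 1.0, 1.0, 2.0, 2.0]
rel_t = [0.0, 0.0, 0.5, 0.0, 0.0]
s_best = best_scale(cam_t, rel_t, [1.0, 2.0, 3.0])
```

At false scales the reconstructed object motion inherits part of the camera motion, raising the correlation, which is exactly the coupling the criterion penalizes.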
17.
18.
We consider a continuous multi-facility location-allocation problem that aims to minimize the sum of weighted farthest Euclidean distances between (closed convex) polygonal and/or circular demand regions and the facilities they are assigned to. We show that the single-facility version of the problem has a straightforward second-order cone programming formulation and can therefore be efficiently solved to optimality. To solve large instances, we adapt a multi-dimensional direct search descent algorithm to our problem, which is not guaranteed to find the optimal solution. In a special case with circular and rectangular demand regions, this algorithm, if it converges, finds the optimal solution. We also apply a simple subgradient method to the problem. Furthermore, we review the algorithms proposed for the problem in the literature and compare all these algorithms in terms of both solution quality and time. Finally, we consider the multi-facility version of the problem and model it as a mixed integer second-order cone programming problem. As this formulation is weak, we use the alternate location-allocation heuristic to solve large instances.
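For circular demand regions the objective is easy to write down: the farthest point of a disc with center c and radius r from a point x lies at distance ||x - c|| + r. A minimal sketch of the simple subgradient method on the single-facility case follows; the region data and step rule are illustrative assumptions, not the paper's test instances or its SOCP formulation:

```python
import math

# Circular demand regions as (center, radius, weight).
regions = [((0.0, 0.0), 1.0, 1.0), ((4.0, 0.0), 2.0, 1.0)]

def objective(x):
    """Sum of weighted farthest distances from x to the discs."""
    return sum(w * (math.dist(x, c) + r) for c, r, w in regions)

def subgradient(x):
    g = [0.0, 0.0]
    for c, r, w in regions:
        d = math.dist(x, c)
        if d > 1e-12:  # gradient of ||x - c|| is (x - c) / ||x - c||
            g[0] += w * (x[0] - c[0]) / d
            g[1] += w * (x[1] - c[1]) / d
    return g

x = [1.0, 3.0]
for k in range(200):              # diminishing steps 1/(k+1)
    g = subgradient(x)
    step = 1.0 / (k + 1)
    x = [x[0] - step * g[0], x[1] - step * g[1]]
```

For these two equally weighted discs any point on the segment between the centers is optimal, with objective value dist(c1, c2) + r1 + r2 = 7, and the iterates approach that value.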
19.
Kemal Subulan, Mehmet Cakmakci, The International Journal of Advanced Manufacturing Technology, 2012, 59(5-8): 433–443
Nowadays, in order to adapt to a global market where competition is getting tougher, firms that follow modern production approaches must maximize not only the performance of the system designed during both the research and development phase and the production phase, but also the performance of the product to be developed and the process to be improved. The Taguchi method is an experimental design technique that seeks to minimize the effect of uncontrollable factors using orthogonal arrays; it can also be viewed as a set of plans specifying how data are collected through experiments. Experiments are carried out using factors defined at different levels and a solution model generated in the ARENA 3.0 program using SIMAN, a simulation language. The experimental investigations reveal that the speed and capacity of the automated guided vehicle, the capacities of the local depots, and the mean time between shipments from the main depot are the major parameters affecting the performance criteria of the storage system. To evaluate the experiment results and the effects of the related factors, analysis of variance and signal-to-noise ratios are used, and the experiments are carried out in MINITAB 15 according to a Taguchi L16 scheme. The purpose of this study is to show that experimental design is a useful method not only for product development and process improvement but also for the design of material handling and transfer systems and the performance optimization of automation technologies to be integrated into firms.
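The signal-to-noise ratios used to evaluate the Taguchi runs have standard closed forms. A small sketch of the two common variants, applied to hypothetical throughput replicates for two AGV-speed settings (the data are invented for illustration, not taken from the study):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio when larger responses are better:
    -10 * log10(mean(1 / y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio when smaller responses are better:
    -10 * log10(mean(y^2))."""
    return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# Hypothetical throughput replicates for two factor settings of an L16 run.
# Both settings have the same mean, but the second is noisier, so its
# larger-is-better S/N ratio is lower.
fast_agv = [98.0, 102.0, 100.0]
slow_agv = [80.0, 120.0, 100.0]
```

The factor level with the higher S/N ratio is preferred because it delivers the target response with less sensitivity to uncontrollable variation.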
20.
The effects of the size and shape of austenite grains on the extraordinary hardening of steels with transformation-induced plasticity (TRIP) have been studied. The deformation and transformation of austenite was followed by interrupted ex situ bending tests using electron backscatter diffraction (EBSD) in a scanning electron microscope (SEM). A finite element model (FEM) was used to relate the EBSD-based results obtained in the bending experiments to the hardening behavior obtained from tensile experiments. The results are interpreted using a simple rule of mixtures for stress partitioning and a short-fiber-reinforced composite model. It is found that both the martensite transformation rate and the flow stress difference between austenite and martensite significantly influence the hardening rate. At the initial stage of deformation mainly the larger grains deform; however, they do not reach the same strain level as the smaller grains because they transform into martensite at an early stage of deformation. A composite model was used to investigate the effect of grain shape on load partitioning, showing that higher stresses develop in more elongated grains. These grains tend to transform earlier, as confirmed by the EBSD observations.
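The simple rule of mixtures used above for stress partitioning is a linear weighting of the phase flow stresses by the martensite volume fraction. A minimal sketch with hypothetical flow stress values (the numbers are illustrative, not from the study):

```python
def mixture_stress(f_martensite, sigma_martensite, sigma_austenite):
    """Linear rule of mixtures: composite stress as the volume-fraction
    weighted average of the two phase flow stresses."""
    return (f_martensite * sigma_martensite
            + (1 - f_martensite) * sigma_austenite)

# Hypothetical flow stresses (MPa): hard martensite vs. softer austenite.
sigma_m, sigma_a = 2000.0, 800.0

# As austenite transforms and the martensite fraction f rises,
# the composite stress climbs, which drives the hardening rate:
stresses = [mixture_stress(f, sigma_m, sigma_a) for f in (0.0, 0.25, 0.5)]
```

The larger the flow stress gap between the phases, the steeper this rise for a given transformation rate, matching the two influences on hardening identified in the abstract.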