298 search results (search time: 0 ms)
1.
Protection of Metals and Physical Chemistry of Surfaces - To improve the mechanical and biological properties and also to increase the lifetime and performance of Ti–6Al–4V dental...
2.
Homogeneous copolymers of N-vinylpyrrolidone (VP) and vinyl acetate (VA), which form clear aqueous solutions, were prepared by free radical polymerization in isopropanol solution, using 2,2′-azobisisobutyronitrile as the initiator. They were characterized by FTIR, 1H-NMR, and elemental analysis. The monomer reactivity ratios were computed by the extended Kelen–Tüdős method at high conversions, using data from both the 1H-NMR and elemental analysis studies. The reactivity ratios of VP and VA in the homogeneous copolymer were observed to be very different from those in a heterogeneous copolymer. Additional information was obtained by determining the sequence length distribution of the copolymers.
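The Kelen–Tüdős linearization referred to above rescales the Fineman–Ross variables so the data fall on a straight line whose slope and intercept yield the reactivity ratios. The sketch below is the classic low-conversion form, not the extended high-conversion variant the study uses, and the feed/copolymer ratios in the usage example are synthetic.

```python
import math

def kelen_tudos(feed_ratios, copoly_ratios):
    """Classic (low-conversion) Kelen-Tudos linearization.
    feed_ratios:   x = [M1]/[M2] in the monomer feed
    copoly_ratios: y = d[M1]/d[M2] in the copolymer
    Returns estimated reactivity ratios (r1, r2)."""
    G = [x * (y - 1.0) / y for x, y in zip(feed_ratios, copoly_ratios)]
    F = [x * x / y for x, y in zip(feed_ratios, copoly_ratios)]
    alpha = math.sqrt(min(F) * max(F))          # scaling constant
    eta = [g / (alpha + f) for g, f in zip(G, F)]
    xi = [f / (alpha + f) for f in F]
    # least-squares line eta = slope*xi + intercept
    n = len(xi)
    mx, my = sum(xi) / n, sum(eta) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xi, eta))
             / sum((a - mx) ** 2 for a in xi))
    intercept = my - slope * mx
    # eta = (r1 + r2/alpha)*xi - r2/alpha
    r2 = -intercept * alpha
    r1 = slope + intercept
    return r1, r2
```

On noise-free data generated from the Mayo–Lewis equation the fit recovers the reactivity ratios exactly, since the transform is an algebraic rescaling of that equation.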
3.
Two important problems that can affect the performance of classification models are high dimensionality (an overabundance of independent features in the dataset) and imbalanced data (a skewed class distribution that leaves at least one class with many fewer instances than the others). To address both problems concurrently, we propose an iterative feature selection approach, which repeatedly applies data sampling (to address class imbalance) followed by feature selection (to address high dimensionality), and finally performs an aggregation step that combines the ranked feature lists from the separate sampling iterations. This approach is designed to find a ranked feature list that is particularly effective on the more balanced dataset produced by sampling, while minimizing the risk of losing data through the sampling step and missing important features. To demonstrate the technique, we employ 18 different feature selection algorithms and Random Undersampling with two post-sampling class distributions. We also investigate the use of sampling and feature selection without the iterative step (i.e., using the ranked list from a single iteration rather than combining the lists from multiple iterations), and compare those results with the iterative version. Our study is carried out on three groups of datasets with different levels of class balance, all collected from a real-world software system. All of our experiments use four different learners and one feature subset size. We find that the proposed iterative feature selection approach outperforms the non-iterative approach.
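A minimal sketch of the iterative approach described above: each iteration undersamples the majority class, ranks features on the balanced sample, and the per-iteration ranks are aggregated at the end. The mean-difference ranker here is a toy stand-in for the 18 feature selection algorithms in the study, and class labels are assumed binary (1 = minority).

```python
import random
from collections import defaultdict

def undersample(X, y, rng):
    """Random undersampling of the majority class to a 1:1 distribution."""
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    keep = rng.sample(majority, min(len(majority), len(minority)))
    idx = sorted(minority + keep)
    return [X[i] for i in idx], [y[i] for i in idx]

def rank_features(X, y):
    """Toy filter ranker: score each feature by |mean(class1) - mean(class0)|."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        c0 = [row[j] for row, label in zip(X, y) if label == 0]
        c1 = [row[j] for row, label in zip(X, y) if label == 1]
        scores.append(abs(sum(c1) / len(c1) - sum(c0) / len(c0)))
    return sorted(range(n_feat), key=lambda j: -scores[j])  # best first

def iterative_feature_ranking(X, y, iterations=10):
    """Repeat (sample -> rank), then aggregate by total rank across iterations."""
    totals = defaultdict(float)
    for it in range(iterations):
        Xs, ys = undersample(X, y, random.Random(it))
        for rank, j in enumerate(rank_features(Xs, ys)):
            totals[j] += rank
    return sorted(totals, key=lambda j: totals[j])  # lowest mean rank first
```

Because every iteration sees a different random subsample, features that rank well only by chance on one subsample are pushed down by the aggregation, which is the robustness argument the abstract makes.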
4.
Neural Processing Letters - Deep learning is an important subcategory of machine learning approaches in which there is a hope of replacing man-made features with fully automatic extracted features....
5.
6.
The problem of missing values in software measurement data used in empirical analysis has led to the proposal of numerous potential solutions. Imputation procedures, for example, have been proposed to 'fill in' the missing values with plausible alternatives. We present a comprehensive study of imputation techniques using real-world software measurement datasets. Two datasets with dramatically different properties were utilized in this study, with missing values injected according to three different missingness mechanisms (MCAR, MAR, and NI). We consider the occurrence of missing values in multiple attributes, and compare three procedures: Bayesian multiple imputation, k-Nearest Neighbor imputation, and mean imputation. We also examine the relationship between noise in the dataset and the performance of the imputation techniques, which has not been addressed previously. Our comprehensive experiments demonstrate conclusively that Bayesian multiple imputation is an extremely effective imputation technique.
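Two of the three compared procedures are simple enough to sketch directly (Bayesian multiple imputation is considerably more involved and is omitted). The code below uses None for missing cells and is illustrative only, not the study's implementation.

```python
import math

def mean_impute(data):
    """Replace each None with the mean of the observed values in its column."""
    cols = len(data[0])
    means = []
    for j in range(cols):
        obs = [row[j] for row in data if row[j] is not None]
        means.append(sum(obs) / len(obs))
    return [[v if v is not None else means[j] for j, v in enumerate(row)]
            for row in data]

def knn_impute(data, k=2):
    """Fill each missing cell with the mean of that column over the k nearest
    complete rows (Euclidean distance on the jointly observed columns)."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b)
                  if x is not None and y is not None]
        return math.sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))
    complete = [row for row in data if all(v is not None for v in row)]
    out = []
    for row in data:
        if all(v is not None for v in row):
            out.append(list(row))
            continue
        neighbors = sorted(complete, key=lambda c: dist(row, c))[:k]
        out.append([v if v is not None else
                    sum(n[j] for n in neighbors) / len(neighbors)
                    for j, v in enumerate(row)])
    return out
```

Mean imputation ignores the relationships between attributes entirely, while k-NN exploits them, which is one reason the distance-based and model-based methods tend to outperform it.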
Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the Director of the Empirical Software Engineering and Data Mining and Machine Learning Laboratories. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, machine learning, and statistical modeling. He has published more than 300 refereed papers in these areas. He is a member of the IEEE, IEEE Computer Society, and IEEE Reliability Society. He was the program chair and general chair of the IEEE International Conference on Tools with Artificial Intelligence in 2004 and 2005, respectively. He has served on the technical program committees of various international conferences, symposia, and workshops; has served as North American editor of the Software Quality Journal; and is on the editorial boards of the journals Software Quality and Fuzzy Systems.

Jason Van Hulse received the Ph.D. degree in Computer Engineering from the Department of Computer Science and Engineering at Florida Atlantic University in 2007, the M.A. degree in Mathematics from Stony Brook University in 2000, and the B.S. degree in Mathematics from the University at Albany in 1997. His research interests include data mining and knowledge discovery, machine learning, computational intelligence, and statistics. He has published numerous peer-reviewed research papers in various conferences and journals, and is a member of the IEEE, IEEE Computer Society, and ACM. He has worked in the data mining and predictive modeling field at First Data Corp. since 2000, and is currently Vice President, Decision Science.
7.
The liquid-liquid equilibrium of the polyethylene glycol dimethyl ether 2000 (PEGDME2000) + K2HPO4 + H2O system was determined experimentally at T = (298.15, 303.15, 308.15, and 318.15) K, and the liquid-solid and complete phase diagrams of this system were also obtained at T = (298.15 and 308.15) K. A nonlinear temperature-dependent equation was successfully used to correlate the experimental binodal data, and a temperature-dependent Setschenow-type equation was successfully used to correlate the tie-lines of the studied system. The effect of temperature on the binodal curves and tie-lines of this aqueous two-phase system was also examined. The free energies of cloud points for this system and for some previously studied systems containing PEGDME2000 were calculated, from which it was concluded that the increase in entropy is the driving force for the formation of aqueous two-phase systems. The calculated free energies of phase separation were additionally used to investigate the salting-out ability of salts with different anions, and the complete phase diagram of the investigated system was compared with those of previously studied PEGDME2000-based systems to obtain information on the phase behavior of these PEGDME2000 + salt + water systems.
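One commonly used Setschenow-type tie-line correlation for aqueous two-phase systems relates the polymer partitioning between the phases to the difference in salt concentration; the study's temperature-dependent form will differ in detail. The sketch below fits the two parameters of that simple form by least squares to hypothetical tie-line data.

```python
import math

def fit_setschenow(tie_lines):
    """tie_lines: list of (cp_top, cp_bot, cs_top, cs_bot) where cp is the
    polymer and cs the salt concentration in the top/bottom phase.
    Fits ln(cp_top/cp_bot) = beta + ks*(cs_bot - cs_top) by least squares
    and returns (beta, ks), ks being the Setschenow-type coefficient."""
    ys = [math.log(a / b) for a, b, _, _ in tie_lines]
    xs = [d - c for _, _, c, d in tie_lines]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ks = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - ks * mx, ks
```

A larger fitted ks corresponds to a stronger salting-out effect, which is how such coefficients are used to compare salts with different anions.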
8.
A resource investment problem with discounted cash flows (RIPDCF) is a project-scheduling problem in which (a) the availability levels of the resources are decision variables and (b) the goal is to find a schedule that optimizes the net present value of the project cash flows. In this paper, the RIPDCF in which the activities are subject to generalized precedence relations is first modeled. Then, a genetic algorithm (GA) is proposed to solve this model. In addition, design of experiments and response surface methodology are employed both to tune the GA parameters and to evaluate the performance of the proposed method on 240 test problems. The results of the performance analysis show that the proposed GA method is reasonably efficient.
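The discounting at the core of the RIPDCF objective is easy to sketch: each cash-flow event is weighted by the discount factor of its scheduled time, so shifting activities (and hence their payments) changes the objective. The rate and cash flows below are hypothetical.

```python
def project_npv(cash_flows, rate):
    """Net present value of (time, amount) cash-flow events at a per-period
    discount rate. In the RIPDCF, the schedule determines the times, so the
    GA's fitness evaluation reduces to a sum of this form."""
    return sum(amount / (1.0 + rate) ** t for t, amount in cash_flows)
```

Delaying a positive cash flow shrinks its discounted contribution, which is why the schedule and the chosen resource availability levels (which constrain how early activities can run) interact in the objective.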
9.
Software product and process metrics can be useful predictors of which modules are likely to have faults during operations. Developers and managers can use such predictions by software quality models to focus enhancement efforts before release. However, in practice, software quality modeling methods in the literature may not produce a useful balance between the two kinds of misclassification rates, especially when there are few faulty modules. This paper presents a practical classification rule, in the context of classification tree models, that allows appropriate emphasis on each type of misclassification according to the needs of the project. This is especially important when faulty modules are rare. An industrial case study using classification trees illustrates the tradeoffs. The trees were built using the TREEDISC algorithm, which is a refinement of the CHAID algorithm. We examined two releases of a very large telecommunications system, and built models suited to two points in the development life cycle: the end of coding and the end of beta testing. Both trees had only five significant predictors, out of 28 and 42 candidates, respectively. We interpreted the structure of the classification trees, and found the models had useful accuracy.
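The kind of emphasis-adjustable rule the abstract describes can be sketched generically as a cost-based threshold on a leaf's estimated fault probability; this is the standard cost-sensitive decision rule, not the paper's TREEDISC-specific formulation.

```python
def fault_prone(p_fault, cost_ratio):
    """Predict fault-prone when p_fault >= C_fp / (C_fp + C_fn), i.e. when
    the expected cost of labeling the module fault-free exceeds that of
    labeling it fault-prone. cost_ratio = C_fn / C_fp: how much worse
    missing a faulty module is than flagging a fault-free one."""
    return p_fault >= 1.0 / (1.0 + cost_ratio)
```

With cost_ratio = 1 this is the usual majority rule (threshold 0.5); raising it lowers the threshold, trading more false alarms for fewer missed faulty modules. That is exactly the knob needed when faulty modules are rare and a 0.5 threshold would classify almost everything as fault-free.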
10.
In this paper, a multiproduct inventory control problem is considered in which the periods between two replenishments of the products are assumed to be independent random variables, and increasing and decreasing functions are used to model the dynamic demand of each product. Furthermore, the order quantities are assumed to be integers, space and budget are constrained, the service level is a chance constraint, and a partial back-ordering policy is taken into account for shortages. The costs of the problem are holding, purchasing, and shortage costs. We show that the model of this problem is an integer nonlinear program, and a harmony search approach is used to solve it. Finally, three numerical examples of different sizes are given to demonstrate the applicability of the proposed methodology to real-world inventory control problems, to validate the results obtained, and to compare its performance with that of both a genetic algorithm and a particle swarm optimization algorithm.
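A compact sketch of harmony search for an integer decision vector, with infeasible solutions (e.g., violating the budget or space constraints) handled by a penalty; the cost function and constraint in the usage test are hypothetical stand-ins for the inventory model's holding, purchasing, and shortage costs.

```python
import random

def harmony_search(cost, bounds, feasible, iters=2000, hms=10,
                   hmcr=0.9, par=0.3, seed=0):
    """Generic integer-variable harmony search (minimization).
    cost(x) -> float; bounds: (lo, hi) per variable; feasible(x) -> bool.
    hmcr = harmony memory considering rate, par = pitch adjusting rate."""
    rng = random.Random(seed)
    def score(x):
        return cost(x) + (0.0 if feasible(x) else 1e9)  # penalty method
    memory = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=score)
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                v = memory[rng.randrange(hms)][j]
                if rng.random() < par:               # pitch adjustment
                    v = min(hi, max(lo, v + rng.choice((-1, 1))))
            else:                                    # random selection
                v = rng.randint(lo, hi)
            new.append(v)
        if score(new) < score(memory[-1]):           # replace worst harmony
            memory[-1] = new
            memory.sort(key=score)
    return memory[0]
```

The integer pitch adjustment (±1 on a memorized value) is what makes the metaheuristic natural for integer order quantities, which is presumably why it was chosen over continuous-variable methods.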
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号