Full-text access type
Paid full text | 341 articles |
Free | 28 articles |
Free (domestic) | 13 articles |
Subject category
Electrical engineering | 13 articles |
General | 23 articles |
Chemical industry | 24 articles |
Metalworking | 4 articles |
Machinery & instrumentation | 26 articles |
Architecture & building science | 13 articles |
Mining engineering | 2 articles |
Energy & power engineering | 8 articles |
Light industry | 9 articles |
Hydraulic engineering | 3 articles |
Petroleum & natural gas | 1 article |
Weapons industry | 11 articles |
Radio & electronics | 29 articles |
General industrial technology | 57 articles |
Metallurgy | 1 article |
Nuclear technology | 4 articles |
Automation technology | 154 articles |
Publication year
2023 | 5 articles |
2022 | 4 articles |
2021 | 10 articles |
2020 | 9 articles |
2019 | 5 articles |
2018 | 8 articles |
2017 | 9 articles |
2016 | 18 articles |
2015 | 6 articles |
2014 | 23 articles |
2013 | 23 articles |
2012 | 16 articles |
2011 | 33 articles |
2010 | 23 articles |
2009 | 16 articles |
2008 | 25 articles |
2007 | 40 articles |
2006 | 25 articles |
2005 | 19 articles |
2004 | 9 articles |
2003 | 13 articles |
2002 | 7 articles |
2001 | 7 articles |
2000 | 1 article |
1999 | 3 articles |
1998 | 2 articles |
1997 | 4 articles |
1996 | 4 articles |
1995 | 1 article |
1993 | 4 articles |
1992 | 1 article |
1991 | 1 article |
1990 | 2 articles |
1987 | 1 article |
1985 | 1 article |
1984 | 1 article |
1976 | 1 article |
1972 | 1 article |
1971 | 1 article |
Sort by: 382 results found, search time 0 ms
1.
Javier Roca-Pardiñas Carmen Cadarso-Suárez María J. Lado 《Computational statistics & data analysis》2008,52(4):1958-1970
In many applications, the joint effect of two continuous covariates on a binary target response may vary across groups defined by the levels of a given factor. A testing procedure has been designed to detect this type of factor-by-surface interaction. To accomplish this goal, a logistic generalized additive model (GAM) with bivariate continuous interactions varying across the levels of a factor is considered. A local scoring algorithm based on local linear kernel smoothers was implemented to estimate the proposed logistic GAM, and bootstrap resampling techniques were used to test for factor-by-surface interactions. Given the high computational cost involved, binning techniques were used to speed up both the estimation and the testing processes. The adequacy of the bootstrap-based test was assessed by means of a simulation study. When a factor-by-surface interaction is detected in the model, odds-ratio curves prove very useful for obtaining a direct interpretation of the fitted model. The benefits of this methodology for analyzing real data are illustrated by applying the technique to the outputs of a computerized system for the early detection of breast cancer.
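The abstract above tests for an interaction by comparing a bootstrap null distribution against the observed statistic. The paper's own method fits a bivariate-surface logistic GAM; as a much simpler sketch of the same testing idea, the example below replaces the surface with a single linear slope and approximates the null ("no interaction") by resampling the group labels. All names (`slope`, `bootstrap_interaction_test`) and the toy data are illustrative, not from the paper.

```python
import numpy as np

def slope(x, y):
    # least-squares slope of y on x
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

def bootstrap_interaction_test(x, y, group, n_boot=500, seed=1):
    """Resampling test for a slope-by-factor interaction.

    H0: the effect of x on y is identical in both groups.
    The statistic is the absolute difference between group-wise
    slopes; the null distribution is approximated by resampling
    the group labels, which destroys any real interaction.
    """
    rng = np.random.default_rng(seed)
    g0, g1 = group == 0, group == 1
    t_obs = abs(slope(x[g0], y[g0]) - slope(x[g1], y[g1]))
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        perm = rng.permutation(group)
        p0, p1 = perm == 0, perm == 1
        t_null[b] = abs(slope(x[p0], y[p0]) - slope(x[p1], y[p1]))
    return t_obs, float(np.mean(t_null >= t_obs))  # resampling p-value

# toy data in which the slope genuinely differs across groups
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
y = np.where(group == 0, 0.2 * x, 1.5 * x) + rng.normal(scale=0.3, size=n)
t_obs, p = bootstrap_interaction_test(x, y, group)
```

With a strong simulated interaction the p-value comes out near zero; the paper's binning trick addresses the cost of repeating a full GAM fit, which this toy statistic avoids entirely.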
2.
Patrick Musonda 《Computational statistics & data analysis》2008,52(4):1942-1957
Second-order expressions for the asymptotic bias and variance of the log relative incidence estimator are derived for the self-controlled case series model in a simplified scenario. The dependence of the bias and variance on factors such as the relative incidence and ratio of risk to observation period are studied. Small-sample performance of the estimator in realistic scenarios is investigated using simulations. It is found that, in scenarios likely to arise in practice, asymptotic methods are valid for numbers of cases in excess of 20-50 depending on the ratio of the risk period to the observation period and on the relative incidence. The application of Monte Carlo methods to self-controlled case series analyses is also discussed.
3.
G. Mestres F. Niefloud R. Fortune J. M. Devoisselle R. Marti H. Maillols 《Drug development and industrial pharmacy》1996,22(12):1193-1199
This paper reports a study using static and dynamic light scattering to investigate the influence of sodium salicylate and methyl salicylate on droplet size in oil-in-water emulsions. The rates of change were measured by determining the size and distribution of the oil droplets in the material. All emulsions showed a bimodal size distribution; the mean diameters and polydispersity were calculated from intensity. These data were analyzed with nonlinear regressions and bootstrap methodology. The addition of methyl salicylate induced a decrease in mean diameter and standard deviation; in contrast, sodium salicylate caused all droplet populations to grow and led to coalescence at the highest concentration.
4.
ExPosition is a new comprehensive R package providing crisp graphics and implementing multivariate analysis methods based on the singular value decomposition (SVD). The core techniques implemented in ExPosition are: principal components analysis, (metric) multidimensional scaling, correspondence analysis, and several of their recent extensions such as barycentric discriminant analyses (e.g., discriminant correspondence analysis), multi-table analyses (e.g., multiple factor analysis, Statis, and distatis), and non-parametric resampling techniques (e.g., permutation and bootstrap). Several examples highlight the major differences between ExPosition and similar packages. Finally, the future directions of ExPosition are discussed.
5.
Use of zero-inflated count data models is common in applications where the number of zero counts exceeds that predicted by a traditional count data model such as the Poisson or negative binomial. When count data exhibiting inflated zero counts are correlated among subjects, a natural approach is to fit a marginal model via generalized estimating equations (GEE), which can incorporate subject-to-subject correlations. A GEE-based zero-inflated negative binomial (ZINB) model is proposed to fit clustered counts with excessive zeros. However, the corresponding sandwich variance estimator appears to underestimate the true variance. The theoretical reasons for its failure are explained and a correction under additional modeling assumptions is offered. In addition, a clustered resampling (bootstrap) procedure is proposed to estimate the variance, and it is shown that the bootstrap procedure captures the correct variance under no additional model assumptions. The utility of this marginal GEE-based ZINB model over two other competing models is assessed in a thorough simulation study. The resulting inference procedure is applied to study the association between dental caries and fluoride exposures using a dataset extracted from the Iowa Fluoride Study. A number of risk factors of clinical significance are reliably identified using the proposed model.
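The key point of the clustered bootstrap in the abstract above is that whole clusters, not individual observations, are resampled, so within-cluster correlation is preserved in every replicate. The sketch below shows only that mechanism for the simplest possible statistic (an overall mean of simulated correlated counts), not the paper's GEE/ZINB fit; the function name and simulation parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate clustered counts: a shared per-cluster effect induces
# positive within-cluster correlation
n_clusters, cluster_size = 40, 5
cluster_effect = rng.normal(scale=0.5, size=n_clusters)
data = rng.poisson(np.exp(1.0 + cluster_effect)[:, None],
                   size=(n_clusters, cluster_size))

def cluster_bootstrap_se(data, n_boot=1000, seed=1):
    """Standard error of the overall mean obtained by resampling
    entire clusters (rows) with replacement, keeping each cluster's
    observations together so the correlation structure is preserved."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    stats = [data[rng.integers(0, n, size=n)].mean() for _ in range(n_boot)]
    return float(np.std(stats, ddof=1))

naive_se = data.std(ddof=1) / np.sqrt(data.size)  # pretends all obs are i.i.d.
boot_se = cluster_bootstrap_se(data)
```

On correlated data the cluster-bootstrap standard error comes out larger than the naive i.i.d. one, which is exactly the underestimation the abstract reports for the sandwich estimator.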
6.
If the production process, production equipment, or material changes, it becomes necessary to execute pilot runs before mass production in manufacturing systems. Using the limited data obtained from pilot runs to shorten the lead time for predicting future production is therefore worthy of study. Although artificial neural networks are widely used to extract management knowledge from acquired data, sufficient training data is a fundamental assumption. Unfortunately, this is often not achievable for pilot runs, because few data are obtained during trial stages, which theoretically means that the knowledge obtained is fragile. The purpose of this research is to use the bootstrap to generate virtual samples that fill the information gaps in sparse data. The results of this research indicate that the prediction error rate can be significantly decreased by applying the proposed method to a very small data set.
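The abstract above uses bootstrap-generated "virtual samples" to enlarge a tiny pilot-run data set before training a neural network. The sketch below shows only the generation step: drawing with replacement from the pilot data to create many same-distribution virtual sets. The function name `virtual_samples` and the toy pilot measurements are illustrative, and the downstream network training is omitted.

```python
import numpy as np

def virtual_samples(pilot, n_virtual, n_sets=50, seed=0):
    """Generate bootstrap 'virtual' samples from a small pilot-run dataset.

    Each virtual set is drawn with replacement from the pilot data, so it
    mimics the pilot distribution while enlarging the training pool for a
    downstream learner.
    """
    rng = np.random.default_rng(seed)
    pilot = np.asarray(pilot)
    idx = rng.integers(0, len(pilot), size=(n_sets, n_virtual))
    return pilot[idx]

pilot = np.array([10.2, 9.8, 10.5, 10.1, 9.9])  # a tiny pilot run
vs = virtual_samples(pilot, n_virtual=30)       # 50 virtual sets of 30 points
```

Because every virtual value is drawn from the pilot run itself, the augmented pool can only interpolate the pilot distribution; smoothed variants (e.g. adding small noise) are a common refinement, though the abstract does not say which the authors used.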
7.
Learning from imperfect (noisy) information sources is a challenging and real-world issue for many data mining applications. Common practices include enhancing data quality by applying data preprocessing techniques, or employing robust learning algorithms to avoid developing overly complicated structures that overfit the noise. The essential goal is to reduce the impact of noise and ultimately enhance the learners built from noise-corrupted data. In this paper, we propose a novel corrective classification (C2) design, which incorporates data cleansing, error correction, Bootstrap sampling and classifier ensembling for effective learning from noisy data sources. C2 differs from existing classifier ensembling or robust learning algorithms in two respects. On one hand, the set of diverse base learners constituting the C2 ensemble is constructed via a Bootstrap sampling process; on the other hand, C2 further improves each base learner by unifying error detection, correction and data cleansing to reduce noise impact. Being corrective, the classifier ensemble is built from data preprocessed/corrected by the data cleansing and correcting modules. Experimental comparisons demonstrate that C2 is not only more accurate than the learner built from the original noisy sources, but also more reliable than Bagging [4] or the aggressive classifier ensemble (ACE) [56], which are two degenerated components/variants of C2. The comparisons also indicate that C2 is more stable than Boosting and DECORATE, which are two state-of-the-art ensembling methods. For real-world imperfect information sources (i.e. noisy training and/or test data), C2 is able to deliver more accurate and reliable prediction models than its peers.
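The C2 pipeline above combines Bootstrap sampling with a per-replicate cleansing step before each base learner is fitted. The paper's actual error-detection and correction modules are not specified in the abstract, so the sketch below substitutes deliberately simple stand-ins: a nearest-centroid base learner, and a "cleansing" rule that drops training points a preliminary fit misclassifies. Everything here (`fit_stump`, `cleanse`, `c2_ensemble`, the blob data, 15% label noise) is illustrative, not the authors' implementation.

```python
import numpy as np

def fit_stump(X, y):
    # nearest-centroid "base learner": one centroid per class
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

def cleanse(X, y):
    # stand-in error-detection step: drop points that a preliminary
    # fit on the (noisy) replicate misclassifies
    keep = predict(fit_stump(X, y), X) == y
    return X[keep], y[keep]

def c2_ensemble(X, y, n_learners=25, seed=1):
    """Bootstrap-sample the training set, cleanse each replicate,
    and fit one base learner per cleansed replicate."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_learners):
        idx = rng.integers(0, len(y), size=len(y))
        Xb, yb = cleanse(X[idx], y[idx])
        models.append(fit_stump(Xb, yb))
    return models

def vote(models, X):
    # majority vote across the ensemble
    return (np.mean([predict(m, X) for m in models], axis=0) >= 0.5).astype(int)

# two Gaussian blobs with 15% of the training labels flipped
rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + 3.0 * y[:, None]
y_noisy = np.where(rng.random(n) < 0.15, 1 - y, y)
models = c2_ensemble(X, y_noisy)
acc = float(np.mean(vote(models, X) == y))   # accuracy against clean labels
```

Dropping Bagging's cleansing step or the voting step recovers the "degenerated variants" the abstract compares against.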
8.
Guillermo Mendez 《Computational statistics & data analysis》2011,55(11):2937-2950
Random forest, a data-mining technique which uses multiple classification or regression trees, is a popular algorithm used for prediction. Inference and goodness-of-fit assessment, however, may require an estimator of variability; in many applications the residual variance is of primary interest. This paper proposes two estimators of residual variance for random forest regression that take advantage of byproducts of the algorithm. The first estimator is based on the residual sum of squares from a random forest fit and uses a bootstrap bias correction. The second estimator is a difference-based estimator that uses proximity measures as weights. The estimators are evaluated through Monte Carlo simulations. Applications of the methods to the problem of assessing the relative variability of males and females on cognitive and achievement tests are discussed, and the methods are applied to estimate the residual variance in test scores for male and female students on the mathematics portion of the 2007 Arizona Instrument to Measure Standards.
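The first estimator above applies a bootstrap bias correction to a residual-sum-of-squares estimate. The generic mechanism is: estimate the estimator's bias as the gap between the average of bootstrap replicates and the full-sample estimate, then subtract that bias. The sketch below demonstrates the mechanism on the textbook downward-biased variance estimator (dividing by n) rather than on a random forest fit; the names and data are illustrative.

```python
import numpy as np

def biased_var(x):
    # plug-in variance estimator: divides by n, so it is biased low
    return float(np.mean((x - x.mean()) ** 2))

def bootstrap_bias_corrected(x, n_boot=2000, seed=1):
    """Bootstrap bias correction: estimate the bias of an estimator as
    (mean of bootstrap replicates) - (estimate on the full sample),
    then subtract that bias from the original estimate."""
    rng = np.random.default_rng(seed)
    theta = biased_var(x)
    reps = [biased_var(rng.choice(x, size=len(x))) for _ in range(n_boot)]
    bias = float(np.mean(reps)) - theta
    return theta - bias

rng = np.random.default_rng(0)
x = rng.normal(scale=2.0, size=30)      # true variance = 4
corrected = bootstrap_bias_corrected(x)
```

For the plug-in variance the bootstrap bias estimate is approximately -theta/n, so the corrected value is pushed upward toward the unbiased n/(n-1) estimate; in the paper the same subtraction is applied to the random forest RSS-based estimator.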
9.
This study aims to contribute to the definition of a methodology that can help select a relevant roughness parameter for describing the topography of orthopaedic bearing surfaces. In this investigation, the surface topography of a retrieved titanium alloy (TA6V) femoral head was characterized using visual inspection, optical microscopy and three-dimensional contacting profilometry. A numerical analysis of roughness measurements was then undertaken, first to assess the values of the different roughness parameters of interest found in papers dealing with the topography of orthopaedic bearing surfaces. Second, Analysis of Variance (ANOVA) and the Computer-Based Bootstrap Method were combined to determine statistically, and without preconceived opinion, which of those parameters is the most relevant for describing the different investigated worn regions of the studied femoral head.
10.
To address the low classification accuracy caused by the large bias in probability estimates when a naive Bayes text classifier is trained on word clusters, this paper builds on word clusters obtained by a probability-distribution clustering algorithm: ordered word subsequences are constructed according to the mutual information between words and clusters, random sampling with replacement is applied to the word sequences to build sample sets of comparable size, and the averages of the estimated parameters are taken as the final trained parameters used to classify unknown texts. Experimental results on public text datasets show that, compared with conventional naive Bayes training, the proposed training method achieves higher classification accuracy while remaining relatively simple.
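The core of the training scheme above is averaging parameter estimates over resamples drawn with replacement, rather than taking a single maximum-likelihood fit. The sketch below shows that averaging step for the multinomial word probabilities of one class, with Laplace smoothing; the clustering and mutual-information ordering stages are omitted, and all names (`nb_params`, `bagged_nb_params`) and the toy counts are illustrative.

```python
import numpy as np

def nb_params(counts, alpha=1.0):
    # Laplace-smoothed multinomial word probabilities for one class
    c = counts.sum(axis=0) + alpha
    return c / c.sum()

def bagged_nb_params(doc_counts, n_boot=200, seed=1):
    """Average the smoothed parameter estimates over bootstrap resamples
    of the training documents, as a variance-reducing alternative to a
    single fit on the original documents."""
    rng = np.random.default_rng(seed)
    n = doc_counts.shape[0]
    reps = [nb_params(doc_counts[rng.integers(0, n, size=n)])
            for _ in range(n_boot)]
    return np.mean(reps, axis=0)

# toy class: 6 documents over a 4-word vocabulary
docs = np.array([[3, 1, 0, 0],
                 [2, 2, 1, 0],
                 [4, 0, 1, 1],
                 [1, 3, 0, 0],
                 [2, 1, 2, 0],
                 [3, 2, 0, 1]])
p = bagged_nb_params(docs)   # averaged class-conditional word probabilities
```

Each replicate is a valid probability vector, so the average is too; classifying a new document then proceeds as in ordinary naive Bayes with `p` in place of the single-fit estimate.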