91.
In the era of big data, traditional regression models cannot handle uncertain big data efficiently and accurately. To address this deficiency, this paper proposes a quantum fuzzy regression model, which uses fuzzy theory to describe the uncertainty in big data sets and uses quantum computing to exponentially improve the efficiency of data set preprocessing and parameter estimation. Data envelopment analysis (DEA) is used to calculate the importance of each data point, while the Harrow, Hassidim and Lloyd (HHL) algorithm and quantum swap circuits are used to improve the efficiency of high-dimensional data matrix computations. Applying the quantum fuzzy regression model to small-scale financial data shows that its accuracy is greatly improved compared with the quantum regression model. Moreover, owing to the introduction of quantum computing, the speed of processing high-dimensional data matrices improves exponentially compared with the fuzzy regression model. The proposed quantum fuzzy regression model combines the advantages of fuzzy theory and quantum computing: it can efficiently compute high-dimensional data matrices and complete parameter estimation while retaining the uncertainty in big data. It is thus a new model for efficient and accurate big data processing in uncertain environments.
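As a rough classical illustration of the parameter-estimation step, the sketch below solves a weighted least-squares problem in which per-sample importance weights stand in for the DEA-derived importance scores; the quantum HHL linear solver from the abstract is replaced here by an ordinary dense solve, and all data and weights are made up for the example.

```python
# Classical sketch of the fuzzy-importance parameter-estimation step.
# Assumptions: importance weights w (as DEA would supply) are given;
# the quantum HHL solver is replaced by numpy.linalg.solve.
import numpy as np

def fuzzy_weighted_fit(X, y, w):
    """Solve the weighted normal equations (X^T W X) beta = X^T W y."""
    W = np.diag(w)
    A = X.T @ W @ X
    b = X.T @ W @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)
w = rng.uniform(0.5, 1.0, size=50)   # stand-in for DEA importance scores
print(fuzzy_weighted_fit(X, y, w))
```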
92.
The ring laser gyroscope is an ideal component for strapdown inertial navigation systems and is widely used in aviation, aerospace, marine navigation, and ground positioning and orientation. However, the laser gyro is highly sensitive to temperature: temperature changes cause variations in its bias, which ultimately degrade the initial alignment and navigation accuracy of the strapdown system. Therefore, when the laser gyro is required to operate in high-precision applications, appropriate temperature-error compensation measures must be taken. Based on extensive temperature tests of laser gyros, this paper analyzes how temperature and the rate of temperature change affect the gyro bias, and proposes a temperature compensation model for the laser gyro. Experimental verification shows that the model mitigates the influence of temperature on gyro accuracy to a certain extent, laying a foundation for further improving laser gyro precision.
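The abstract does not give the exact form of the compensation model; the sketch below assumes a simple linear model in temperature and temperature-change rate, fitted by least squares, purely to illustrate the kind of compensation described.

```python
# A minimal sketch of a laser-gyro bias temperature-compensation fit.
# Assumption: bias is modeled as b = a0 + a1*T + a2*dT/dt; the paper's
# actual model form is not specified, so this choice is illustrative.
import numpy as np

def fit_temp_model(temp, bias, dt=1.0):
    rate = np.gradient(temp, dt)                       # temperature change rate dT/dt
    A = np.column_stack([np.ones_like(temp), temp, rate])
    coeffs, *_ = np.linalg.lstsq(A, bias, rcond=None)  # least-squares coefficients
    return coeffs

def compensate(temp, bias, coeffs, dt=1.0):
    rate = np.gradient(temp, dt)
    predicted = coeffs[0] + coeffs[1] * temp + coeffs[2] * rate
    return bias - predicted                            # residual bias after compensation
```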
93.
We study ordinal embedding relaxations in the realm of parameterized complexity. We prove the existence of a quadratic kernel for the Betweenness problem parameterized above its tight lower bound, which is stated as follows. For a set V of variables and a set C of constraints "v_i is between v_j and v_k", decide whether there is a bijection from V to the set {1,…,|V|} satisfying at least |C|/3 + κ of the constraints in C. Our result solves an open problem attributed to Benny Chor in Niedermeier's monograph "Invitation to Fixed-Parameter Algorithms". The Betweenness problem is of interest in molecular biology. The approach developed in this paper can be used to determine the parameterized complexity of a number of other optimization problems on permutations parameterized above or below tight bounds.
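To make the decision question concrete, the short sketch below counts how many betweenness constraints a candidate bijection satisfies and compares the count against the |C|/3 + κ threshold; the variables, constraints, and ordering are toy data invented for the example.

```python
# A minimal sketch of the Betweenness objective: count how many constraints
# "v_i is between v_j and v_k" a candidate bijection satisfies and compare
# against the tight lower bound |C|/3 plus the parameter kappa.
def satisfied(order, constraints):
    """order maps each variable to a position in {1, ..., |V|}."""
    count = 0
    for vi, vj, vk in constraints:
        if order[vj] < order[vi] < order[vk] or order[vk] < order[vi] < order[vj]:
            count += 1
    return count

constraints = [("a", "b", "c"), ("b", "a", "d"), ("c", "d", "a")]   # toy instance
order = {"b": 1, "a": 2, "c": 3, "d": 4}
kappa = 0
print(satisfied(order, constraints) >= len(constraints) / 3 + kappa)
```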
94.
Clustering analysis of temporal gene expression data is widely used to study dynamic biological systems, such as identifying sets of genes that are regulated by the same mechanism. However, temporal gene expression data often contain noise, missing data points, and non-uniformly sampled time points, which makes it hard for traditional clustering methods to extract meaningful information. In this paper, we introduce an improved clustering approach based on regularized spline regression and an energy-based similarity measure. The proposed approach models each gene expression profile as a B-spline expansion, whose spline coefficients are estimated by a regularized least-squares scheme on the observed data. To compensate for the inadequate information in noisy and short gene expression data, we use correlated genes as the test set to choose the optimal number of basis functions and the regularization parameter. We show that this treatment helps to avoid over-fitting. After fitting continuous representations of the gene expression profiles, we use an energy-based similarity measure for clustering. The energy-based measure incorporates the temporal information and relative changes of the time series through their first and second derivatives. We demonstrate that our method is robust to noise and produces meaningful clustering results.
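The sketch below illustrates the two core ingredients described in the abstract: a smoothing (regularized) spline fit to each noisy, irregularly sampled profile, and a derivative-aware energy-style distance between fitted profiles. The derivative weights w0, w1, w2 and the toy data are illustrative assumptions, not the paper's settings.

```python
# Sketch: regularized spline fit per profile + energy-style distance
# that also uses first and second derivatives of the fitted curves.
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_profile(t, y, smooth=1.0):
    return UnivariateSpline(t, y, k=3, s=smooth)   # s controls the regularization

def energy_distance(f, g, grid, w0=1.0, w1=0.5, w2=0.25):
    d0 = np.trapz((f(grid) - g(grid)) ** 2, grid)
    d1 = np.trapz((f.derivative(1)(grid) - g.derivative(1)(grid)) ** 2, grid)
    d2 = np.trapz((f.derivative(2)(grid) - g.derivative(2)(grid)) ** 2, grid)
    return w0 * d0 + w1 * d1 + w2 * d2

t = np.sort(np.random.default_rng(1).uniform(0, 10, 30))   # non-uniform time points
f = fit_profile(t, np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size))
g = fit_profile(t, np.cos(t) + 0.1 * np.random.default_rng(3).normal(size=t.size))
print(energy_distance(f, g, np.linspace(0, 10, 200)))
```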
95.
Active contour model based on kernel density estimation (total citations: 1; self-citations: 0; citations by others: 1)
王玉  黎明  李凌 《计算机工程》2010,36(5):196-198
Without an appropriate perturbation mechanism, active contour models based on kernel density estimation often fail to converge well on edges with abrupt changes in curvature, and they are not robust in heavily noisy environments. To address this problem, a new cost function is proposed. By incorporating the curvature information of the edge map, the new function improves the convergence of the original algorithm at abrupt edges and reduces its dependence on the initial contour.
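As a small illustration of the curvature term mentioned above, the sketch below computes the curvature of an edge map as the divergence of its normalized gradient field; it covers only this single ingredient, not the full contour model or its cost function.

```python
# Sketch: curvature of an edge map as the divergence of the unit gradient field.
import numpy as np

def edge_map_curvature(edge_map, eps=1e-8):
    gy, gx = np.gradient(edge_map.astype(float))   # image gradients (rows, cols)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    nx, ny = gx / norm, gy / norm
    # divergence of the unit normal field gives the curvature at each pixel
    return np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
```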
96.
To build regression models between physical property parameters and near-infrared (NIR) spectral data, a Boosting-PLS algorithm for multivariate calibration is presented, based on the idea of constructing a sequence of regressors. Each (weak/base) regressor is built on a subset of the original calibration set, where each subset is obtained by resampling the original calibration set with replacement according to sample probabilities, and the probability of each sample is determined by the prediction error of the previous regressor. Samples with large errors receive higher probabilities, so that subsequent regressors focus more on training them. The final ensemble regression model is a weighted median of the weak regressors. An NIR application example and a comparison with partial least squares (PLS) confirm the good performance of the Boosting-PLS algorithm: the resulting calibration model is more accurate, more robust, and insensitive to over-fitting.
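The sketch below follows the boosting loop described above using scikit-learn's PLSRegression as the weak regressor: sampling probabilities are driven by the previous model's errors, and predictions are combined by a weighted median. The specific weight-update and model-weight formulas are simplified assumptions, not the paper's exact scheme.

```python
# Sketch of a Boosting-PLS loop: error-driven resampling of the calibration set
# plus a weighted-median ensemble of weak PLS regressors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def boosting_pls(X, y, n_rounds=10, n_components=2, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    p = np.full(n, 1.0 / n)                      # sampling probabilities
    models, weights = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, replace=True, p=p)
        model = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        err = np.abs(y - model.predict(X).ravel())
        rel = err / (err.max() + 1e-12)
        models.append(model)
        weights.append(1.0 - rel.mean())         # better models get larger weights
        p = rel / rel.sum() if rel.sum() > 0 else np.full(n, 1.0 / n)  # focus on hard samples
    return models, np.array(weights)

def predict(models, weights, X):
    preds = np.column_stack([m.predict(X).ravel() for m in models])
    out = np.empty(len(X))
    for i in range(len(X)):                      # weighted median across the ensemble
        order = np.argsort(preds[i])
        cum = np.cumsum(weights[order]) / weights.sum()
        out[i] = preds[i, order[np.searchsorted(cum, 0.5)]]
    return out
```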
97.
李全文  阮波  徐可佳  于勇  肖劲飞 《计算机应用》2010,30(11):2983-2985
Building on the basic principles of feature extraction with principal component analysis (PCA) and kernel principal component analysis (KPCA), an improved method is proposed that extracts nonlinear image features to reconstruct the image, and it is applied to defect detection in embedded anti-counterfeiting watermark patterns. The method greatly reduces the dimensionality of the image covariance matrix while effectively preserving the information of the embedded watermark pattern, and image defects are detected by comparison. Experimental results show that the method achieves effective dimensionality reduction of the input data, shortens computation time, and improves detection performance and accuracy. Compared with the original PCA algorithm, KPCA achieves better performance metrics and is applicable to a wider range of cases.
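The sketch below shows a KPCA-based screening step in the spirit of this abstract: fit KPCA on defect-free watermark images (flattened to vectors) and flag test images with unusually large reconstruction error. The kernel choice, gamma value, and the "large error means defect" threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: KPCA reconstruction error as a defect score for watermark images.
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_kpca(reference_images, n_components=10, gamma=1e-3):
    X = np.stack([img.ravel() for img in reference_images]).astype(float)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    kpca.fit(X)
    return kpca

def reconstruction_error(kpca, image):
    x = image.ravel().astype(float)[None, :]
    x_hat = kpca.inverse_transform(kpca.transform(x))
    return float(np.linalg.norm(x - x_hat))   # a large error suggests a defect
```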
98.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n²) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points. In absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea in our method is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude greater in size than those reported in previous work. The cost of this high acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error is still amply controlled by our Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
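For context, the sketch below spells out the naive O(n²) objective that this work accelerates: the leave-one-out cross-validated log-likelihood of a Gaussian-kernel conditional density estimate, evaluated over a small grid of candidate bandwidths. The data, bandwidth grid, and one-dimensional setting are illustrative assumptions; it is the baseline computation, not the dual-tree algorithm itself.

```python
# Sketch: naive leave-one-out CV likelihood for KCDE bandwidth selection.
import numpy as np

def loo_log_likelihood(x, y, hx, hy):
    n = len(x)
    total = 0.0
    for i in range(n):
        kx = np.exp(-0.5 * ((x - x[i]) / hx) ** 2)
        ky = np.exp(-0.5 * ((y - y[i]) / hy) ** 2)
        kx[i] = ky[i] = 0.0                                   # leave point i out
        num = np.sum(kx * ky) / (hy * np.sqrt(2 * np.pi))     # joint kernel sum
        den = np.sum(kx)                                      # marginal kernel sum
        total += np.log(num / (den + 1e-300) + 1e-300)
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(scale=0.5, size=300)
grid = [0.1, 0.2, 0.5, 1.0]
best = max(((hx, hy) for hx in grid for hy in grid),
           key=lambda h: loo_log_likelihood(x, y, *h))
print("selected bandwidths:", best)
```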
99.
Data envelopment analysis (DEA) is a non-parametric method for measuring the efficiency and productivity of decision-making units (DMUs). Data mining techniques, on the other hand, allow DMUs to explore and discover meaningful, previously hidden information in large databases. Classification and regression (C&R) trees are a commonly used decision-tree technique in data mining. DEA determines efficiency scores but cannot give details of the factors related to inefficiency, especially if these factors are non-numeric variables such as operational style in the banking sector. This paper proposes a framework that combines DEA with C&R trees for assessing the efficiency and productivity of DMUs. The result of the combined model is a set of rules that policy makers can use to discover the reasons behind efficient and inefficient DMUs. As a case study, we use the proposed methodology to investigate factors associated with the efficiency of the banking sector in the Gulf Cooperation Council countries.
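The sketch below illustrates the combined pipeline in miniature: CCR input-oriented DEA efficiency scores computed by linear programming, a binary efficient/inefficient label, and a classification tree grown on explanatory factors to yield readable rules. The toy data, factor names, and the 0.999 efficiency cut-off are illustrative assumptions.

```python
# Sketch: DEA (CCR, input-oriented) efficiency scores + a C&R-style tree
# that explains efficient vs. inefficient DMUs from non-DEA factors.
import numpy as np
from scipy.optimize import linprog
from sklearn.tree import DecisionTreeClassifier, export_text

def dea_ccr_scores(inputs, outputs):
    n, m = inputs.shape          # n DMUs, m inputs
    s = outputs.shape[1]         # s outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                        # minimize theta
        A_in = np.hstack([-inputs[o][:, None], inputs.T])  # sum(lam*x) <= theta*x_o
        A_out = np.hstack([np.zeros((s, 1)), -outputs.T])  # sum(lam*y) >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -outputs[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

rng = np.random.default_rng(0)
inputs = rng.uniform(1, 10, size=(30, 2))    # e.g. staff, operating cost
outputs = rng.uniform(1, 10, size=(30, 1))   # e.g. loans issued
factors = rng.uniform(0, 1, size=(30, 3))    # hypothetical non-numeric-style factors, encoded
eff = dea_ccr_scores(inputs, outputs) >= 0.999
tree = DecisionTreeClassifier(max_depth=3).fit(factors, eff)
print(export_text(tree, feature_names=["f1", "f2", "f3"]))
```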
100.
A scanning electron microscope (SEM) is a sophisticated piece of equipment employed for fine imaging of a variety of surfaces. In this study, prediction models of SEM performance were constructed using a generalized regression neural network (GRNN) and a genetic algorithm (GA). The SEM components examined include condenser lenses 1 and 2 and the objective lens (coarse and fine), referred to as CL1, CL2, OL-Coarse, and OL-Fine. For systematic modeling of SEM resolution (R), a face-centered Box–Wilson experiment was conducted. Two sets of data were collected, with or without adjustment of the magnification. The root-mean-squared prediction errors of the GA-optimized GRNN models are 0.481 and 1.96×10⁻¹² for the non-adjusted and adjusted data, respectively. The optimized models demonstrated much better prediction than statistical regression models and were then used to optimize parameters, particularly under the best-tuned SEM environment. For variations in CL2 and OL-Coarse, the highest R could be achieved under all conditions except a larger CL2 at either a smaller or a larger OL-Coarse. For variations in CL1 and CL2, the highest R was obtained under all conditions except a larger CL2 combined with a smaller CL1.
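A GRNN is essentially Nadaraya-Watson kernel regression with a single spread parameter, which is the quantity a genetic algorithm would tune in the setup described above. The sketch below shows only the GRNN prediction step on made-up lens-setting data; the GA search and the actual SEM data are omitted.

```python
# Sketch: GRNN (Nadaraya-Watson style) prediction with spread parameter sigma.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel weights per training pattern
    return (w @ y_train) / w.sum(axis=1)        # weighted average of training targets

# Illustrative use: predict resolution R from four lens settings
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 4))             # stand-in for CL1, CL2, OL-Coarse, OL-Fine
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=40)
print(grnn_predict(X, y, X[:3], sigma=0.3))
```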