41.
A new signal processing method is developed for estimating the skew angle in text document images. Detection of the skew angle is an important step in text processing tasks such as optical character recognition (OCR) and computerized filing. Based on a recently introduced multiline-fitting algorithm, the proposed method reformulates skew detection as a special parameter-estimation problem whose signal structure resembles that found in sensor array processing. In this framework, straight lines in an image are modeled as wavefronts of propagating planar waves. Measurements are defined in this virtual propagation environment so that the strong coherency between the locations of pixels on parallel lines is exploited to enhance a subspace in the space spanned by the measurements. Well-studied sensor array processing techniques (e.g., the ESPRIT algorithm) are then applied to produce a closed-form, high-resolution estimate of the skew angle.
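The abstract's array-processing formulation (ESPRIT on virtual wavefronts) is too involved for a short sketch; as a hedged stand-in for the same task, the example below scores candidate de-skew angles with a simple projection-profile sharpness measure. All data, the angle grid, and the histogram settings are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Synthetic "document": pixels on three parallel text lines, skewed 3 degrees.
rng = np.random.default_rng(0)
true_skew = np.deg2rad(3.0)
xs = rng.uniform(0, 200, 900)                      # pixel x-coordinates
rows = rng.choice([20.0, 60.0, 100.0], 900)        # three text-line baselines
ys = rows + xs * np.tan(true_skew) + rng.normal(0, 0.2, 900)

def estimate_skew(xs, ys, angles):
    """Return the candidate angle whose de-skewed row profile is sharpest."""
    best_angle, best_score = angles[0], -np.inf
    for a in angles:
        proj = ys - xs * np.tan(a)                 # undo a hypothetical skew
        hist, _ = np.histogram(proj, bins=80)
        score = np.var(hist)                       # peaked profile means lines aligned
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

angles = np.deg2rad(np.linspace(-10, 10, 201))     # 0.1-degree search grid
est_deg = np.rad2deg(estimate_skew(xs, ys, angles))
print(est_deg)
```

The grid search trades the closed-form, high-resolution property of the ESPRIT-based method for simplicity; its resolution is limited by the grid spacing.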
42.
Pose retrieval of a rigid object from monocular video sequences or images is addressed. Initially, the object pose is estimated in each image assuming a flat depth map. Shape-from-silhouette is then applied to build a 3-D model (volume), which is used for a new round of pose estimation, this time by a model-based method that gives better estimates. Before repeating this process with a newly built volume, the pose estimates are adjusted to reduce error by maximizing a novel quality factor for shape-from-silhouette volume reconstruction. The feedback loop terminates when the pose estimates change little compared with those of the previous iteration. Based on a theoretical study of the proposed system, a test of convergence to a given set of poses is devised. Reliable performance of the system is also demonstrated by several experiments on both synthetic and real image sequences. No prior model of the object is assumed, and because no feature points are detected or tracked, the problematic feature matching and correspondence steps are avoided. The method can be used for 3-D object tracking in video, 3-D modeling, and volume reconstruction from video.
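The shape-from-silhouette step can be sketched as voxel carving: a voxel survives only if its projection lies inside every silhouette. The grid size, object (a sphere), and two orthographic views below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Voxel grid with centers at half-integer coordinates.
n = 32
ax = np.arange(n) - n / 2 + 0.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

r = 10.0
sil_xy = (ax[:, None] ** 2 + ax[None, :] ** 2) <= r ** 2   # silhouette seen along z
sil_xz = (ax[:, None] ** 2 + ax[None, :] ** 2) <= r ** 2   # silhouette seen along y

# Carve: keep a voxel only if its projection is inside every silhouette.
volume = np.ones((n, n, n), dtype=bool)
volume &= sil_xy[:, :, None]
volume &= sil_xz[:, None, :]
print(volume.sum())
```

With only two views the carved volume is the intersection of two cylinders, a superset of the true sphere; more views (and the paper's pose-adjustment loop) tighten the hull.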
43.
An adaptive spatial filtering method is proposed that takes contextual information into account in fMRI activation detection. The filter replaces the time series of each voxel with a weighted average of the time series of a small neighborhood around it. The filter coefficients at each voxel are derived so as to maximize a test statistic designed to indicate the presence of activation. This statistic is the ratio of the energy of the filtered time series in a signal subspace to the energy of the residuals. It is shown that the filter coefficients and the maximum energy ratio can be found by solving a generalized eigenproblem: the filter coefficients are the elements of the eigenvector corresponding to the largest eigenvalue of a specific matrix, and the largest eigenvalue itself is the maximum energy ratio, which serves as the statistic for detecting activation. The distribution of this statistic under the null hypothesis is derived by a nonparametric permutation technique in the wavelet domain. This paper also introduces a new set of basis vectors that define the signal subspace. The space spanned by these basis vectors covers a wide range of possible hemodynamic response functions (HRFs) and is applicable to both event-related and block-design fMRI signal analysis, circumventing the need for a priori assumptions about the exact shape of the HRF. Resting-state experimental fMRI data were used to assess the specificity of the method, showing that the actual false-alarm rate of the proposed method is equal to or less than its expected value. Analysis of simulated data and motor-task fMRI datasets from six volunteers showed improved sensitivity compared with a conventional test using a similar statistic applied to spatially smoothed data.
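The generalized-eigenproblem step can be sketched directly: the filter w maximizing the energy ratio (wᵀAw)/(wᵀBw) is the top generalized eigenvector of the pair (A, B). The neighborhood data, the single-regressor signal subspace, and the matrices below are toy assumptions; only the algebra reflects the described method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_t = 9, 120                                  # 3x3 neighborhood, 120 scans
signal = np.sin(2 * np.pi * np.arange(n_t) / 20)     # toy activation regressor
Y = 0.5 * np.outer(rng.uniform(0.5, 1.5, n_vox), signal)  # shared activation
Y += rng.normal(0, 1.0, (n_vox, n_t))                # voxel-wise noise

P = np.outer(signal, signal) / (signal @ signal)     # projector onto signal subspace
A = Y @ P @ Y.T                                      # energy in the signal subspace
B = Y @ (np.eye(n_t) - P) @ Y.T                      # residual energy

# Solve A w = lambda B w by whitening B with its Cholesky factor.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
vals, vecs = np.linalg.eigh(Linv @ A @ Linv.T)
lam_max = vals[-1]                                   # maximum energy ratio (the statistic)
w = Linv.T @ vecs[:, -1]                             # filter coefficients
print(lam_max)
```

By construction, lam_max is at least as large as the energy ratio of any single voxel in the neighborhood, which is why the adaptive filter can only improve the statistic.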
44.
A new method is proposed for activation detection in event-related functional magnetic resonance imaging (fMRI). The method is based on the analysis of selected resolution levels (a subspace) in the translation-invariant wavelet transform (TIWT) domain. Using a priori knowledge about the activation signal and trends, we analyze their power at different resolution levels in the TIWT domain and select an optimal set of resolution levels. A randomization-based statistical test is then applied in the wavelet domain for activation detection. This approach suppresses the effects of trends and enhances detection sensitivity. In addition, since the TIWT is insensitive to signal translations, the power analysis is robust with respect to signal shifts, and the randomization test alleviates the need for assumptions about fMRI noise. The method has been applied to simulated and experimental fMRI datasets, and its results have been compared with those of a similar method in the time domain and of the cross-correlation method. The proposed method shows superior sensitivity to the other methods.
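The randomization-test idea can be illustrated in isolation, with a plain correlation statistic standing in for the paper's TIWT subspace statistic (a deliberate simplification). The block design, effect size, and permutation count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t = 100
design = (np.arange(n_t) // 10) % 2                 # toy on/off block design
active = 0.8 * design + rng.normal(0, 1, n_t)       # voxel with activation
resting = rng.normal(0, 1, n_t)                     # null voxel

def perm_pvalue(ts, design, n_perm=2000):
    """P-value by permutation: no distributional assumption on the noise."""
    obs = abs(np.corrcoef(ts, design)[0, 1])
    null = np.array([abs(np.corrcoef(rng.permutation(ts), design)[0, 1])
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= obs)) / (n_perm + 1)

p_active = perm_pvalue(active, design)
p_resting = perm_pvalue(resting, design)
print(p_active, p_resting)
```

The null distribution is built from the data itself, which is what lets the paper avoid parametric assumptions about fMRI noise.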
47.
Judging by the increasing impact of machine learning on large-scale data analysis over the last decade, one can anticipate substantial growth in the diversity of machine learning applications for “big data” over the next decade. This exciting new opportunity, however, also raises many challenges. One of them is scaling inference within, and training of, graphical models. Typical ways to address this scaling issue are inference by approximate message passing, stochastic gradients, and MapReduce, among others. Often, we encounter inference and training problems with symmetries and redundancies in the graph structure; prominent examples are relational models, which capture such complexity. Exploiting these symmetries, however, has not yet been considered for scaling. In this paper, we show that inference and training can indeed benefit from exploiting symmetries. Specifically, we show that (loopy) belief propagation (BP) can be lifted: a model is compressed by grouping together nodes that send and receive identical messages, so that a modified BP running on the lifted graph yields the same marginals as BP on the original one, but often in a fraction of the time. By establishing a link between lifting and radix sort, we show that lifting is MapReduce-able. Still, in many if not most situations, training relational models will not benefit from this (scalable) lifting: symmetries within models easily break, since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and allows lifting to be exploited within and across the local training tasks. Moreover, it naturally paves the way for the first scalable lifted training approaches based on stochastic gradients, both in an online and a MapReduce fashion. On several datasets, the online training, for instance, converges to a solution of the same quality over an order of magnitude faster, simply because it starts optimizing long before having seen the entire mega-example even once.
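The lifting step, grouping nodes that send and receive identical messages, can be sketched as color passing: colors are refined iteratively by the multiset of neighbor colors until stable, and each final color class can then be represented by a single node in the lifted graph. This is a hedged illustration of the grouping idea, not the paper's implementation.

```python
from collections import Counter

def color_refine(adj):
    """adj: {node: [neighbors]}. Returns a stable node -> color-class map."""
    colors = {v: 0 for v in adj}                  # start with a single class
    for _ in range(len(adj)):
        sig = {v: (colors[v],
                   tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:                          # refinement is stable
            break
        colors = new
    return colors

# A 6-cycle is perfectly symmetric: lifting compresses it to one class.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
# A 4-node path splits into endpoint and interior classes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(len(set(color_refine(cycle).values())),
      len(set(color_refine(path).values())))
```

The signature of a node (its color plus the multiset of neighbor colors) can be sorted digit by digit, which is the link to radix sort that makes the procedure MapReduce-able.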
48.
Experimental confirmation of various grain scattering theories exists, but there are still outstanding questions concerning the characterization of microstructure using ultrasound. In this study, high-purity niobium serves as a model material devoid of extrinsic scattering centers (e.g., voids, precipitates, and second-phase particles). A range of microstructures was obtained by annealing in ultrahigh vacuum at different temperatures (600–800°C) after routine cold rolling and a recrystallization heat treatment. Ultrasonic attenuation was measured as a function of frequency for each sample. For samples with an intermediate grain size (typically ∼50 μm), the attenuation follows a power-law dependence on frequency with an exponent of n∼1.6, close to the prediction, n=2, of classical stochastic scattering theory. However, quantitative comparison shows that the observed attenuation is higher than the classical theory predicts. The Stanke-Kino unified grain scattering theory may explain why the frequency dependence is lower than traditional theories predict, though it still underpredicts the magnitude of the attenuation. In any event, the resulting empirical relations provide a useful approach for practical grain size measurement with an acceptable level of uncertainty. The effect of a layered microstructure typical of some sheets/plates is discussed.
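Fitting the empirical power law α = a·fⁿ reduces to linear regression in log-log coordinates. The frequencies and attenuation values below are made up for illustration; they are not the niobium measurements.

```python
import numpy as np

f = np.array([5.0, 7.5, 10.0, 12.5, 15.0])          # frequency, MHz
a_true, n_true = 0.002, 1.6
alpha = a_true * f ** n_true                        # synthetic attenuation values

# In log-log space, log(alpha) = n * log(f) + log(a): a straight-line fit.
n_fit, log_a = np.polyfit(np.log(f), np.log(alpha), 1)
a_fit = np.exp(log_a)
print(n_fit, a_fit)
```

With real data, the fitted exponent n is the quantity compared against the n=2 stochastic-regime prediction, and the calibrated (a, n) pair serves as the practical grain-size relation.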
49.
Estimating the process capability index (PCI) for non-normal processes has been discussed by many researchers. There are two basic approaches. The first, commonly used approach is to transform the non-normal data into normal data using a transformation technique and then use a conventional normal-theory method to estimate the PCI for the transformed data; this is straightforward and easy to deploy. The alternative approach is to use non-normal percentiles to calculate the PCI; it is harder to implement, and a deviation in estimating the process distribution may affect the efficacy of the estimated PCI. The aim of this paper is to estimate the PCI for non-normal processes using a transformation technique called the root transformation. The efficacy of the proposed technique is assessed in a simulation study using gamma, Weibull, and beta distributions: the root transformation is used to estimate the PCI for each set of simulated data, and the results are compared with the PCIs obtained using exact percentiles and the Box-Cox method. Finally, a case study based on real-world data is presented.
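A hedged sketch of the root-transformation idea: choose a root 1/r that makes the data nearly symmetric, transform the data and the specification limits identically, then apply the conventional normal-theory Cpk. The grid of roots, the spec limits, and the skewness-based selection rule are illustrative assumptions; the paper's exact selection procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.0, size=5000)      # skewed "process" data
usl, lsl = 9.0, 0.05                                # hypothetical spec limits

def skewness(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z ** 3)

# Pick the root whose transform makes the sample most symmetric.
roots = np.linspace(1.5, 6.0, 46)
best_r = min(roots, key=lambda r: abs(skewness(x ** (1 / r))))

# Transform data and limits the same way, then use the normal-theory Cpk.
t = x ** (1 / best_r)
t_usl, t_lsl = usl ** (1 / best_r), lsl ** (1 / best_r)
cpk = min(t_usl - t.mean(), t.mean() - t_lsl) / (3 * t.std())
print(best_r, round(cpk, 2))
```

Because x ↦ x^(1/r) is monotone, transforming the spec limits with the same root preserves the fraction of the process falling outside them, which is what justifies computing Cpk in the transformed scale.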
50.
We develop a fuzzy multi-objective linear programming (FMOLP) model for solving a multi-objective mixed-model assembly line problem. In practice, the vagueness and imprecision of the goals make fuzzy decision-making in this problem complicated. The proposed model minimizes total utility work, total production rate variation, and total setup cost using a two-phase linear programming approach. In the first phase, the problem is solved with a max–min approach. Since the max–min solution is not, in general, efficient, we propose a new model in the second phase that maximizes a composite satisfaction degree at least as good as the degrees obtained in phase one. To show the effectiveness of the proposed approach, a numerical example is solved and the results are compared with those obtained by fuzzy mixed-integer goal programming and weighted additive methods. The computational results show that the proposed FMOLP model achieves lower objective function values as well as higher satisfaction degrees.
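The phase-one max-min step is an ordinary LP: maximize λ subject to μ_k(x) ≥ λ for every fuzzy goal. The toy memberships μ₁ = 1 − x₁, μ₂ = 1 − x₂ and the constraint x₁ + x₂ = 1 below are illustrative assumptions, not the assembly-line model.

```python
from scipy.optimize import linprog

# Variables are (x1, x2, lam); linprog minimizes, so maximize lam via -lam.
c = [0.0, 0.0, -1.0]
A_ub = [[1.0, 0.0, 1.0],              # x1 + lam <= 1, i.e. lam <= mu_1(x)
        [0.0, 1.0, 1.0]]              # x2 + lam <= 1, i.e. lam <= mu_2(x)
b_ub = [1.0, 1.0]
A_eq = [[1.0, 1.0, 0.0]]              # feasibility constraint x1 + x2 = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # x >= 0 by default
x1, x2, lam = res.x
print(round(lam, 3), round(x1, 3), round(x2, 3))
```

Here the conflicting goals balance at λ = 0.5. Phase two would then add the constraints μ_k(x) ≥ μ_k(x*) from this solution and maximize a composite (e.g., summed) satisfaction degree, recovering an efficient solution.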