961.
This paper proposes an approach to compute view-normalized body part trajectories of pedestrians walking on potentially non-linear paths. The proposed approach finds applications in gait modeling, gait biometrics, and in medical gait analysis. Our approach uses the 2D trajectories of both feet and the head extracted from the tracked silhouettes. On that basis, it computes the apparent walking (sagittal) planes for each detected gait half-cycle. A homography transformation is then computed for each walking plane to make it appear as if walking was observed from a fronto-parallel view. Finally, each homography is applied to head and feet trajectories over each corresponding gait half-cycle. View normalization makes head and feet trajectories appear as if seen from a fronto-parallel viewpoint, which is assumed to be optimal for gait modeling purposes. The proposed approach is fully automatic as it requires neither manual initialization nor camera calibration. An extensive experimental evaluation of the proposed approach confirms the validity of the normalization process.
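A minimal sketch of the homography step described above, assuming four correspondences between the apparent walking plane and a canonical fronto-parallel rectangle are available. The plain DLT estimator, the function names, and all coordinates are illustrative, not the authors' implementation.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (each Nx2, N >= 4) with the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map an Nx2 array of 2D points through H, with homogeneous normalization."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical example: corners of the apparent (slanted) walking plane in the image,
# mapped onto a canonical fronto-parallel rectangle, then applied to a foot trajectory.
plane_corners = np.array([[120.0, 310.0], [480.0, 290.0], [470.0, 420.0], [130.0, 450.0]])
canonical = np.array([[0.0, 0.0], [400.0, 0.0], [400.0, 150.0], [0.0, 150.0]])
H = homography_dlt(plane_corners, canonical)
foot_traj = np.array([[150.0, 430.0], [200.0, 425.0], [260.0, 418.0]])
normalized_traj = apply_homography(H, foot_traj)
```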
962.
One of the simplest, and yet most consistently well-performing, families of classifiers is the naïve Bayes model (a special class of Bayesian network models). However, these models rely on the (naïve) assumption that all the attributes used to describe an instance are conditionally independent given the class of that instance. To relax this independence assumption, we have in previous work proposed a family of models, called latent classification models (LCMs). LCMs are defined for continuous domains and generalize the naïve Bayes model by using latent variables to model class-conditional dependencies between the attributes. In addition to providing good classification accuracy, the LCM has several appealing properties, including a relatively small parameter space that makes it less susceptible to over-fitting. In this paper we take a first step towards generalizing LCMs to hybrid domains by proposing an LCM for domains with binary attributes. We present algorithms for learning the proposed model, and we describe a variational approximation-based inference procedure. Finally, we empirically compare the accuracy of the proposed model to that of other classifiers for a number of different domains, including the problem of recognizing symbols in black and white images.
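For reference, a minimal Bernoulli naïve Bayes classifier for binary attributes, i.e., the baseline that the proposed LCM generalizes by adding latent variables. The class name and the Laplace-smoothing choice are illustrative assumptions, not part of the paper.

```python
import numpy as np

class BernoulliNaiveBayes:
    """Minimal Bernoulli naive Bayes for 0/1 attributes (the model the LCM relaxes)."""

    def fit(self, X, y, alpha=1.0):
        self.classes_ = np.unique(y)
        self.log_prior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        # Laplace-smoothed per-class Bernoulli parameter for each attribute.
        self.theta_ = np.array([
            (X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
            for c in self.classes_
        ])
        return self

    def predict(self, X):
        # Log-likelihood of each instance under each class, assuming attribute independence.
        log_lik = X @ np.log(self.theta_).T + (1 - X) @ np.log(1 - self.theta_).T
        return self.classes_[np.argmax(log_lik + self.log_prior_, axis=1)]
```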
963.
In this paper we offer a variational Bayes approximation to the multinomial probit model for basis expansion and kernel combination. Our model is well founded within a hierarchical Bayesian framework and is able to instructively combine the available sources of information for multinomial classification. The proposed framework enables informative integration of possibly heterogeneous sources in a multitude of ways, from the simple summation of feature expansions to the weighted product of kernels, and it is shown to match, and in certain cases outperform, the well-known ensemble learning approaches that combine individual classifiers. At the same time, the approximation considerably reduces the CPU time and resources required compared with both the ensemble learning methods and the full Markov chain Monte Carlo (Metropolis-Hastings within Gibbs) solution of our model. We present the proposed framework together with extensive experimental studies on synthetic and benchmark datasets, and for the first time report a comparison between the summation and the product of individual kernels as alternative methods for constructing the composite kernel matrix.
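A hedged sketch of the two composite-kernel constructions compared in the paper, weighted summation versus element-wise weighted product of base kernels. The RBF base kernel, the weighting scheme, and the function names are assumptions for illustration, not the paper's variational formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def composite_kernel(kernels, weights, mode="sum"):
    """Combine precomputed base kernel matrices as a weighted sum or a weighted
    element-wise product, the two constructions contrasted in the paper."""
    if mode == "sum":
        return sum(w * K for w, K in zip(weights, kernels))
    # Weighted product: K = prod_i K_i ** w_i (element-wise).
    out = np.ones_like(kernels[0])
    for w, K in zip(weights, kernels):
        out *= K ** w
    return out

# Usage with two hypothetical feature sources describing the same samples.
X1, X2 = np.random.rand(50, 10), np.random.rand(50, 4)
K_sum = composite_kernel([rbf_kernel(X1, X1), rbf_kernel(X2, X2)], [0.7, 0.3], mode="sum")
K_prod = composite_kernel([rbf_kernel(X1, X1), rbf_kernel(X2, X2)], [0.7, 0.3], mode="prod")
```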
964.
In this work, neural network-based models for hyperspectral image spectra separation are considered. The focus is on how to select the most informative samples for effectively training the neural architecture. This issue is addressed by several new algorithms for intelligent selection of training samples: (1) a border-training algorithm (BTA), which selects training samples located in the vicinity of the hyperplanes that can optimally separate the classes; (2) a mixed-signature algorithm (MSA), which selects the most spectrally mixed pixels in the hyperspectral data as training samples; and (3) a morphological-erosion algorithm (MEA), which incorporates spatial information (via mathematical morphology concepts) to select spectrally mixed training samples located in spatially homogeneous regions. These algorithms, along with other standard techniques based on orthogonal projections and a simple Maximin-distance algorithm, are used to train a multi-layer perceptron (MLP), selected in this work as a representative neural architecture for spectral mixture analysis. Experimental results are provided using both a database of nonlinear mixed spectra with absolute ground truth and a set of real hyperspectral images, collected at different altitudes by the digital airborne imaging spectrometer (DAIS 7915) and the reflective optics system imaging spectrometer (ROSIS) operating simultaneously at multiple spatial resolutions.
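Of the sample-selection strategies listed, the Maximin-distance algorithm is the simplest to sketch. The greedy version below is a standard formulation and only illustrates how candidate training pixels could be picked; it is not the paper's BTA, MSA, or MEA.

```python
import numpy as np

def maximin_selection(X, n_samples):
    """Greedy Maximin-distance selection over an (n_pixels, n_bands) spectral matrix:
    start from an arbitrary pixel, then repeatedly add the pixel whose distance to the
    already-selected set is largest, spreading the training samples over feature space."""
    selected = [0]
    d = np.linalg.norm(X - X[0], axis=1)  # distance of every pixel to the selected set
    while len(selected) < n_samples:
        idx = int(np.argmax(d))
        selected.append(idx)
        d = np.minimum(d, np.linalg.norm(X - X[idx], axis=1))
    return selected
```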
965.
The rapid advance of image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Legal issues may arise when a tampered image cannot be distinguished from a real one by visual examination. In this paper, we focus on JPEG images and propose detecting tampered images by examining the double quantization effect hidden among the discrete cosine transform (DCT) coefficients. To our knowledge, our approach is the only one to date that can automatically locate the tampered region, and it has several additional advantages: fine-grained detection at the scale of 8×8 DCT blocks, insensitivity to different kinds of forgery methods (such as alpha matting and inpainting, in addition to simple image cut/paste), the ability to work without fully decompressing the JPEG images, and fast speed. Experimental results on JPEG images are promising.
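A small illustration of the double quantization effect the detector exploits, using synthetic Laplacian-distributed coefficients and made-up quantization steps. This is not the paper's detector, only a demonstration of the periodic histogram artifact that doubly quantized DCT coefficients exhibit.

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2, mimicking what
    happens to the DCT coefficients of untampered blocks in a resaved JPEG."""
    return np.round(np.round(coeffs / q1) * q1 / q2)

# Synthetic demo (all numbers below are made up): compare histograms of singly and
# doubly quantized coefficients; the doubly quantized histogram shows periodically
# empty or exaggerated bins, which is the trace the detector looks for.
coeffs = np.random.laplace(scale=8.0, size=100_000)
bins = np.arange(-30, 31)
hist_single, _ = np.histogram(np.round(coeffs / 3), bins=bins)
hist_double, _ = np.histogram(double_quantize(coeffs, 5, 3), bins=bins)
```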
966.
A new dual wing harmonium model, which integrates term frequency features and term connection features into a low-dimensional semantic space without increasing the computational load, is proposed for document retrieval. Terms and vectorized graph connections are extracted from the graph representation of a document using a weighted feature extraction method. We then develop the dual wing harmonium model to project these multiple features into low-dimensional latent topics under different probability distribution assumptions. The contrastive divergence algorithm is used for efficient learning and inference. We perform extensive experimental verification, and the comparative results suggest that the proposed method is accurate and computationally efficient for document retrieval.
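As a rough stand-in for the harmonium training step, here is a one-step contrastive divergence (CD-1) update for a plain binary restricted Boltzmann machine. The dual wing harmonium uses different conditional distributions for its two feature "wings", so this is only a simplified sketch of the learning rule, not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.01, rng=np.random):
    """One CD-1 step for a binary RBM: W (n_visible, n_hidden), b visible bias,
    c hidden bias, v0 a (batch, n_visible) 0/1 data batch."""
    h_prob0 = sigmoid(v0 @ W + c)                           # positive phase
    h_samp = (rng.random(h_prob0.shape) < h_prob0).astype(float)
    v_prob1 = sigmoid(h_samp @ W.T + b)                     # reconstruction
    h_prob1 = sigmoid(v_prob1 @ W + c)                      # negative phase
    W += lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / len(v0)
    b += lr * (v0 - v_prob1).mean(axis=0)
    c += lr * (h_prob0 - h_prob1).mean(axis=0)
    return W, b, c
```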
967.
Finite mixtures are widely used in information processing and data analysis. However, model selection, i.e., the selection of the number of components in the mixture for a given sample data set, remains a rather difficult task. Recently, Bayesian Ying-Yang (BYY) harmony learning has provided a new approach to Gaussian mixture modeling, with the attractive feature that model selection can be made automatically during parameter learning. In this paper, based on the same BYY harmony learning framework for finite mixtures, we propose an adaptive gradient BYY learning algorithm for Poisson mixtures with automated model selection. Simulation experiments demonstrate that this adaptive gradient BYY learning algorithm can automatically determine the number of actual Poisson components in a sample data set and estimate the parameters of the original or true mixture well, provided that the components are separated to a certain degree. Moreover, the adaptive gradient BYY learning algorithm is successfully applied to texture classification.
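For comparison, a standard EM routine for a k-component Poisson mixture with the number of components fixed in advance. The BYY harmony learning algorithm described above differs in that it adapts the parameters by gradient steps and determines the number of components automatically; the code below is only the conventional baseline.

```python
import numpy as np
from scipy.stats import poisson

def poisson_mixture_em(x, k, n_iter=100):
    """Plain EM for a k-component Poisson mixture over integer counts x (shape (n,)).
    Shown as a baseline; unlike BYY harmony learning, k must be chosen beforehand."""
    lam = np.random.uniform(x.mean() * 0.5, x.mean() * 1.5, size=k)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        resp = w * poisson.pmf(x[:, None], lam)          # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                            # M-step: weights and rates
        lam = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return w, lam
```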
968.
We propose in this paper a segmentation process that can deal with noisy discrete objects. A flexible approach based on arithmetic discrete planes with a variable width is used to avoid the over-segmentation that may occur when classical segmentation algorithms based on regular discrete planes are used to decompose the surface of the object. A method for choosing a seed, and different segmentation strategies according to the shape of the surface, are also proposed, as well as an application to smoothing the border of convex noisy discrete objects.
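A minimal membership test for an arithmetic discrete plane with variable width, the primitive underlying the segmentation: a voxel (x, y, z) belongs to the plane P(a, b, c, mu, omega) iff mu <= a*x + b*y + c*z < mu + omega. The function signature is illustrative, not the authors' recognition algorithm.

```python
import numpy as np

def in_discrete_plane(points, normal, mu, omega):
    """Membership test for the arithmetic discrete plane P(a, b, c, mu, omega).
    points: (n, 3) integer voxel coordinates; normal: (a, b, c); a larger width omega
    tolerates noisier surface patches, which is what keeps over-segmentation in check."""
    r = points @ np.asarray(normal, dtype=float)
    return (mu <= r) & (r < mu + omega)

# Example: voxels of a noisy staircase tested against a plane of width 2.5.
voxels = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 1], [2, 1, 1], [3, 3, 5]])
mask = in_discrete_plane(voxels, normal=(1, 1, -2), mu=-1, omega=2.5)
```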
969.
Dimensionality reduction is a big challenge in many areas. A large number of local approaches, stemming from statistics or geometry, have been developed. In practice, however, these local approaches often lack robustness: in contrast to maximum variance unfolding (MVU), which explicitly unfolds the manifold, they merely characterize the local geometric structure. Moreover, the eigenproblems they encounter are hard to solve. We propose a unified framework that explicitly unfolds the manifold and reformulates the local approaches as semi-definite programs instead of the above-mentioned eigenproblems. Three well-known algorithms, locally linear embedding (LLE), Laplacian eigenmaps (LE), and local tangent space alignment (LTSA), are reinterpreted and improved within this framework. Several experiments are presented to demonstrate the potential of our framework and the improvements of these local algorithms.
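As a baseline for the local methods mentioned, a short usage sketch of standard LLE via scikit-learn on a synthetic swiss-roll data set; the semi-definite-program reformulation proposed in the paper is not shown here.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic 3D manifold data, then a 2D embedding with plain (eigenproblem-based) LLE.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)  # (1000, 2) embedding characterizing only local geometry
```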
970.
Rotations in the discrete plane are important for many applications such as image matching or the construction of mosaic images. We suppose that a digital image A is transformed to another digital image B by a rotation. In the discrete plane, there are many angles giving the rotation from A to B, which we call admissible rotation angles from A to B. For such a set of admissible rotation angles, there exist two angles that achieve the lower and the upper bounds. To find those lower and upper bounds, we use hinge angles as introduced in Nouvel and Rémila [Incremental and transitive discrete rotations, in: R. Reulke, U. Eckardt, B. Flach, U. Knauer, K. Polthier (Eds.), Combinatorial Image Analysis, Lecture Notes in Computer Science, vol. 4040, Springer, Berlin, 2006, pp. 199-213]. A sequence of hinge angles is a set of particular angles determined by a digital image, in the sense that any angle between two consecutive hinge angles gives the identical rotation of the digital image. We propose a method for obtaining the lower and the upper bounds of admissible rotation angles using hinge angles, either from a given Euclidean angle or from a pair of corresponding digital images.
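A tiny illustration of why hinge angles exist: rounding a rotated integer grid makes the digitized rotation piecewise constant in the angle, so any two angles inside the same hinge interval produce identical outputs. The example points and angles below are arbitrary, chosen only to show the effect.

```python
import numpy as np

def discrete_rotation(points, theta):
    """Rotate integer pixel coordinates by theta and round back to the grid; all angles
    between two consecutive hinge angles of these points yield the same result."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.rint(points @ R.T).astype(int)

# Two nearby angles lying in the same hinge interval for these points give the same
# digitized rotation, even though the underlying Euclidean rotations differ.
pts = np.array([[3, 1], [5, 2], [7, 4]])
assert np.array_equal(discrete_rotation(pts, 0.300), discrete_rotation(pts, 0.3001))
```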