  Subscription full text:  3290 articles
  Free:  266 articles
  Free (domestic):  5 articles
Electrical engineering:  25 articles
General:  4 articles
Chemical industry:  657 articles
Metal processing:  62 articles
Machinery & instrumentation:  84 articles
Building science:  147 articles
Mining engineering:  11 articles
Energy & power:  81 articles
Light industry:  444 articles
Water conservancy engineering:  30 articles
Petroleum & natural gas:  5 articles
Radio engineering:  289 articles
General industrial technology:  687 articles
Metallurgy:  386 articles
Atomic energy technology:  8 articles
Automation technology:  641 articles
  2023:  48 articles
  2022:  37 articles
  2021:  105 articles
  2020:  83 articles
  2019:  79 articles
  2018:  148 articles
  2017:  167 articles
  2016:  171 articles
  2015:  116 articles
  2014:  152 articles
  2013:  314 articles
  2012:  256 articles
  2011:  207 articles
  2010:  196 articles
  2009:  175 articles
  2008:  161 articles
  2007:  133 articles
  2006:  101 articles
  2005:  96 articles
  2004:  67 articles
  2003:  73 articles
  2002:  75 articles
  2001:  34 articles
  2000:  25 articles
  1999:  32 articles
  1998:  86 articles
  1997:  68 articles
  1996:  36 articles
  1995:  36 articles
  1994:  32 articles
  1993:  26 articles
  1992:  13 articles
  1991:  8 articles
  1990:  7 articles
  1989:  7 articles
  1988:  15 articles
  1987:  12 articles
  1986:  15 articles
  1985:  13 articles
  1984:  9 articles
  1983:  9 articles
  1982:  12 articles
  1981:  7 articles
  1980:  10 articles
  1979:  8 articles
  1978:  6 articles
  1976:  12 articles
  1974:  6 articles
  1973:  8 articles
  1960:  5 articles
Sort order: 3,561 results found; search time: 828 ms
101.
Video microscopy is a widely applied diagnostic for investigating the structure and dynamics of particles in dusty plasmas. Reliable algorithms are required to accurately recover particle positions from the camera images. Here, four different particle-positioning techniques have been tested on artificial and experimental data from dusty plasma situations. Two methods that rely on pixel-intensity thresholds were found to be strongly affected by pixel-locking errors and by noise. Two other methods, one applying spatial bandpass filters and the other fitting polynomials to the intensity pattern, yield subpixel resolution under various conditions. These two methods have been shown to be ideally suited to recovering particle positions even from the small-scale fluctuations that are used to derive the normal-mode spectra of finite dust clusters.
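The subpixel-positioning idea is easy to illustrate. The sketch below is not one of the paper's four tested algorithms; the synthetic Gaussian spot, the 5% threshold, and the intensity-weighted centroid estimator are all our own illustrative assumptions.

```python
import numpy as np

def gaussian_spot(shape, center, sigma=2.0):
    """Render a synthetic particle image: a Gaussian intensity spot
    at a fractional (x, y) position."""
    y, x = np.indices(shape)
    return np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))

def subpixel_centroid(img, threshold=0.05):
    """Estimate the particle position as the intensity-weighted centroid
    of pixels above a relative threshold (illustrative choice)."""
    mask = img > threshold * img.max()
    y, x = np.nonzero(mask)
    w = img[y, x]
    return np.sum(x * w) / np.sum(w), np.sum(y * w) / np.sum(w)

# Demo: recover a spot placed at a non-integer position.
img = gaussian_spot((32, 32), center=(15.3, 16.7))
cx, cy = subpixel_centroid(img)
```

On a noise-free symmetric spot this recovers the position to well under a tenth of a pixel, which is the kind of subpixel accuracy the abstract refers to.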
102.
This work addresses the soundtrack indexing of multimedia documents. Our purpose is to detect and locate sound units in order to structure the audio data flow of broadcast programmes (reports). We present two audio classification tools that we have developed. The first, a speech/music classification tool, is based on three original features: entropy modulation, stationary segment duration (obtained with a Forward–Backward Divergence algorithm) and number of segments. These are merged with the classical 4 Hz modulation energy. It performs two classifications (speech/non-speech and music/non-music) and achieves more than 90% accuracy for speech detection and 89% for music detection. The second system, a jingle identification tool, uses a Euclidean distance in the spectral domain to index the audio data flow. Results show that it is efficient: of 132 jingles to recognize, 130 were detected. The systems were tested on TV and radio corpora (more than 10 h). They are simple, robust and can be used on any corpus without training or adaptation.
Régine André-Obrecht
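A minimal sketch of the spectral-distance idea behind the jingle detector: compare frame-wise magnitude spectra of a known jingle against a sliding window over the stream. The frame size, hop, threshold, and the synthetic chirp "jingle" below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def spectral_frames(signal, frame=256, hop=128):
    """Magnitude spectra of overlapping frames (a crude spectrogram)."""
    n = 1 + (len(signal) - frame) // hop
    return np.array([np.abs(np.fft.rfft(signal[i * hop:i * hop + frame]))
                     for i in range(n)])

def find_jingle(stream_spec, jingle_spec, threshold):
    """Slide the jingle template over the stream and return frame offsets
    where the mean per-frame Euclidean spectral distance drops below threshold."""
    m = len(jingle_spec)
    hits = []
    for i in range(len(stream_spec) - m + 1):
        d = np.linalg.norm(stream_spec[i:i + m] - jingle_spec, axis=1).mean()
        if d < threshold:
            hits.append(i)
    return hits

# Demo: embed a chirp "jingle" in a noise stream at a known frame offset (16).
rng = np.random.default_rng(0)
jingle = np.sin(2 * np.pi * np.cumsum(np.linspace(0.01, 0.2, 2048)))
stream = rng.normal(0.0, 0.1, 8192)
stream[2048:4096] = jingle
hits = find_jingle(spectral_frames(stream), spectral_frames(jingle), 1e-6)
```

A chirp is used rather than a pure tone so that each frame has a distinct spectrum and only the true offset matches.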
103.
The exponential increase of subjective, user-generated content since the birth of the Social Web has led to the necessity of developing automatic text-processing systems able to extract, process and present relevant knowledge. In this paper, we tackle the opinion retrieval, mining and summarization task by proposing a unified framework composed of three crucial components (information retrieval, opinion mining and text summarization) that allows the retrieval, classification and summarization of subjective information. An extensive analysis is conducted in which different configurations of the framework are suggested and analyzed, in order to determine which is best and under which conditions. The evaluation carried out and the results obtained show the appropriateness of the individual components, as well as of the framework as a whole. By achieving an improvement of over 10% compared with state-of-the-art approaches in the context of blogs, we conclude that subjective text can be dealt with efficiently by means of the proposed framework.
104.
There are two main strategies for solving correspondence problems in computer vision: sparse local feature based approaches and dense global energy based methods. While sparse feature based methods are often used for estimating the fundamental matrix by matching a small set of carefully optimised interest points, dense energy based methods mark the state of the art in optical flow computation. The goal of our paper is to show that this separation into different application domains is unnecessary and can be bridged in a natural way. As a first contribution, we present a new application of dense optical flow to estimating the fundamental matrix. Comparing our results with those obtained by feature based techniques, we identify cases in which dense methods have advantages over sparse approaches. Motivated by these promising results, we propose, as a second contribution, a new variational model that recovers the fundamental matrix and the optical flow simultaneously as the minimisers of a single energy functional. In experiments we show that our coupled approach is able to further improve the estimates of both the fundamental matrix and the optical flow. Our results prove that dense variational methods can be a serious alternative even in classical application domains of sparse feature based approaches.
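To make the geometry concrete, here is the classical normalised eight-point estimate of the fundamental matrix from point correspondences (such as those a dense flow field supplies). This is the textbook baseline, not the paper's coupled variational model; the synthetic two-view scene below is our own construction.

```python
import numpy as np

def normalise(pts):
    """Hartley normalisation: centre the points, scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (pts - c) * s, T

def eight_point(p1, p2):
    """Estimate F (with x2^T F x1 = 0) from >= 8 correspondences."""
    q1, T1 = normalise(p1)
    q2, T2 = normalise(p2)
    A = np.column_stack([q2[:, 0] * q1[:, 0], q2[:, 0] * q1[:, 1], q2[:, 0],
                         q2[:, 1] * q1[:, 0], q2[:, 1] * q1[:, 1], q2[:, 1],
                         q1[:, 0], q1[:, 1], np.ones(len(q1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo normalisation
    return F / np.linalg.norm(F)

# Demo: two synthetic pinhole views of random 3D points.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 3)) + np.array([0, 0, 5.0])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.0, 0.0])
X2 = X @ R.T + t
x1 = X[:, :2] / X[:, 2:]
x2 = X2[:, :2] / X2[:, 2:]
F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(50)])
h2 = np.column_stack([x2, np.ones(50)])
residual = np.abs(np.sum((h2 @ F) * h1, axis=1)).max()
```

With exact correspondences the epipolar residual is near machine precision; with noisy dense flow it degrades gracefully, which is the setting the paper's coupled energy is designed to improve.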
105.
Tracking product trajectories poses major challenges for simulation generation and adaptation. Positioning techniques and technologies have become available and affordable enough to incorporate deeply into workshop operations. We present our two-year effort to develop a general framework connecting location technologies with manufacturing applications. We demonstrate the features of the proposed applications using a case study: a synthetic flexible manufacturing environment with a product-driven policy, which enables the generation of a location data stream of product trajectories over the whole plant. These location data are mined and processed to reproduce the manufacturing system dynamics in an adaptive simulation scheme. This article proposes an original method for generating simulation models of discrete event systems. The method uses the product location data from the running system; the data stream of points (product ID, location, and time) is the starting point for an algorithm that generates a queuing-network simulation model.
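A toy sketch of the first mining step (the event schema and function names are our own; the paper's model-generation algorithm is richer): group the (product ID, location, time) stream into per-product trajectories, then estimate mean sojourn time per station and routing counts, the raw ingredients of a queueing-network model.

```python
from collections import defaultdict

def mine_trajectories(events):
    """From a stream of (product_id, location, time) events, derive
    mean time spent at each location and transition counts between
    locations -- inputs for a queueing-network simulation model."""
    traj = defaultdict(list)
    for pid, loc, t in sorted(events, key=lambda e: (e[0], e[2])):
        traj[pid].append((loc, t))
    stay = defaultdict(list)
    routes = defaultdict(int)
    for visits in traj.values():
        for (loc, t0), (nxt, t1) in zip(visits, visits[1:]):
            stay[loc].append(t1 - t0)       # sojourn until next station
            routes[(loc, nxt)] += 1         # observed routing step
    mean_stay = {loc: sum(ts) / len(ts) for loc, ts in stay.items()}
    return mean_stay, dict(routes)

# Demo: two products moving through stations A -> B (-> C).
events = [(1, 'A', 0), (2, 'A', 1), (1, 'B', 4), (2, 'B', 6), (1, 'C', 9)]
mean_stay, routes = mine_trajectories(events)
```

From here, a discrete-event simulator could instantiate one queue per station with the estimated service times and routing probabilities.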
106.
This paper deals with a variant of flowshop scheduling, namely the hybrid or flexible flowshop with sequence-dependent setup times. This type of flowshop is frequently used in the batch production industry, and studying it helps reduce the gap between research and operational use. The scheduling problem is NP-hard, so solutions for large instances rely on non-exact methods. An improved genetic algorithm (GA) based on software-agent design, aimed at minimising the makespan, is presented. The paper proposes using an inherent characteristic of software agents to create a new perspective in GA design. To verify the developed metaheuristic, computational experiments are conducted on a well-known benchmark problem dataset. The experimental results show that the proposed metaheuristic outperforms several well-known methods and state-of-the-art algorithms on the same benchmark dataset.
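For intuition, here is a bare-bones permutation GA minimising makespan on a plain (non-hybrid, no setup times) flowshop. It omits the paper's agent-based improvements entirely; the operators, rates, and the tiny 5-job instance are illustrative assumptions.

```python
import itertools
import random

def makespan(perm, p):
    """Completion time of the last job on the last machine for a
    permutation flowshop with processing times p[job][machine]."""
    m = len(p[0])
    t = [0] * m
    for j in perm:
        t[0] += p[j][0]
        for k in range(1, m):
            t[k] = max(t[k], t[k - 1]) + p[j][k]
    return t[-1]

def ga_flowshop(p, pop=30, gens=400, seed=0):
    """Tiny elitist GA: one-point order crossover, swap mutation,
    and occasional random immigrants to keep diversity."""
    rng = random.Random(seed)
    n = len(p)
    popn = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: makespan(s, p))
        elite = popn[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            if rng.random() < 0.2:                  # random immigrant
                children.append(rng.sample(range(n), n))
                continue
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            if rng.random() < 0.2:                  # swap mutation
                i, k = rng.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda s: makespan(s, p))

# Demo on a 5-job, 3-machine instance small enough to brute-force.
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [1, 4, 2], [5, 3, 2]]
best = ga_flowshop(p)
best_ms = makespan(best, p)
opt = min(makespan(list(s), p) for s in itertools.permutations(range(len(p))))
```

On an instance this small the GA matches the brute-force optimum; on realistic hybrid instances with setup times, only metaheuristics like the paper's remain tractable.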
107.
A model for teaching psychotherapy theory through an integrative structure. (Cited by: 1; self-citations: 0; citations by others: 1)
This article discusses a model for teaching psychotherapy theory through an integrative structure from the start of graduate students' training. The model articulates an ordering structure for the reputed 400+ so-called "theories" of psychotherapy. The rationale for such a structure highlights one dimension among several: the recognition that a vast majority of mental health practitioners describe their orientation as eclectic or integrative. Professionals in training are encouraged to use this structure as an organizing principle to create the underpinnings for future professional development. The structure informs all aspects of a graduate-level course, including its syllabus, the textbooks selected, the reader, learning objectives, and tools for learning outcome assessment. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
108.
A new laser microstereophotolithography process has been developed in our laboratory to manufacture three-dimensional parts with high accuracy. As usual in laser stereophotolithography and microstereophotolithography, the part is built layer by layer through light-induced, space-resolved polymerization. Until now, all existing microstereophotolithography devices have manufactured each layer vector by vector, moving the part beneath a motionless initiating light source. We developed a simpler process in which an entire layer is manufactured by irradiating its whole surface at once, using a liquid crystal display as a dynamic mask generator. The device we set up needs only one mobile element, the z translator; all the others are fixed. We manufactured several different 3D microparts, in particular a piece of bevel microgearing with helicoidal cogs whose volume is less than half a cubic millimetre. Received: 14 December 1995 / Accepted: 16 September 1996
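The dynamic-mask idea can be pictured in code: slice a target solid into per-layer binary masks, one full-surface LCD exposure per layer. The cone geometry and resolution below are purely illustrative, not taken from the paper.

```python
import numpy as np

def cone_masks(layers, size):
    """Per-layer binary masks for a simple cone: each layer is a disc
    whose radius shrinks with height. True pixels mark where the LCD
    would transmit light and cure the resin for that layer."""
    y, x = np.indices((size, size))
    c = (size - 1) / 2
    r2 = (x - c)**2 + (y - c)**2
    masks = []
    for z in range(layers):
        radius = c * (1 - z / layers)
        masks.append(r2 <= radius**2)
    return masks

# Demo: a 4-layer cone on a 21x21 "display"; cured area shrinks upward.
masks = cone_masks(4, 21)
areas = [int(m.sum()) for m in masks]
```

Each mask would be shown on the LCD in turn while the z translator lowers the part by one layer thickness.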
109.
We address the issue of low-level segmentation of real-valued images. The proposed approach relies on formulating the problem as an energy partition of the image domain. In this framework, an energy is defined by measuring a pseudo-metric distance to a source point; thus, the choice of an energy and a set of sources determines a tessellation of the domain. Each energy acts on the image at a different level of analysis, and through the study of two types of energies, two stages of the segmentation process are addressed. The first energy considered, the path variation, belongs to the class of energies determined by minimal paths; its application as a pre-segmentation method is proposed. In the second part, where the energy is induced by an ultrametric, the construction of hierarchical representations of the image is discussed.
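A minimal version of the energy-partition idea: define a path energy that accumulates intensity differences along a path (a crude simplification of the path-variation energy), then run Dijkstra from all sources at once so each pixel is labelled by its nearest source in that energy. The specific edge weight and test image are our own illustrative choices.

```python
import heapq
import numpy as np

def energy_partition(img, sources):
    """Tessellate the image domain: label each pixel with the source
    of minimal path energy, where the energy of a path accumulates
    absolute intensity differences between neighbouring pixels."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    label = np.full((h, w), -1)
    heap = []
    for k, (y, x) in enumerate(sources):
        dist[y, x] = 0.0
        label[y, x] = k
        heapq.heappush(heap, (0.0, y, x, k))
    while heap:
        d, y, x, k = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + abs(float(img[ny, nx]) - float(img[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    label[ny, nx] = k
                    heapq.heappush(heap, (nd, ny, nx, k))
    return label

# Demo: two flat regions, one source in each; the partition follows the edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
label = energy_partition(img, sources=[(4, 1), (4, 6)])
```

Because paths inside a homogeneous region cost nothing, the tessellation boundary snaps to the intensity edge, which is the pre-segmentation behaviour the abstract describes.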
110.
This paper examines the applicability of several learning techniques to the classification of phonemes. The methods tested were artificial neural nets (ANN), support vector machines (SVM) and Gaussian mixture modeling (GMM). We compare these methods with a traditional hidden Markov phoneme model (HMM), working with linear prediction-based cepstral coefficient (LPCC) features. We also combined the learners with linear/nonlinear and unsupervised/supervised feature-space transformation methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), springy discriminant analysis (SDA) and their nonlinear kernel-based counterparts. We found that the discriminative learners can attain the efficiency of the HMM, and that after the transformations they retain the same performance in spite of severe dimension reduction. The kernel-based transformations brought only marginal improvements compared with their linear counterparts.
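A small numeric illustration of the dimension-reduction claim. The synthetic well-separated "phoneme" clusters and the nearest-centroid classifier below stand in for the real LPCC features and the GMM/ANN/SVM learners; only the PCA step mirrors the paper's setup.

```python
import numpy as np

def pca_fit(X, d):
    """Fit PCA: return the data mean and the top-d principal directions
    (right singular vectors of the centred data)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

def transform(X, mu, W):
    """Project data onto the principal directions."""
    return (X - mu) @ W.T

def centroid_classify(Xtr, ytr, Xte):
    """Nearest class-centroid classifier (a simple stand-in learner)."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xte[:, None] - cents[None], axis=2)
    return classes[d.argmin(axis=1)]

# Demo: three well-separated Gaussian classes in 20 dimensions,
# reduced to 2 dimensions before classification.
rng = np.random.default_rng(2)
dim, n = 20, 40
means = np.zeros((3, dim))
means[0, 0] = means[1, 1] = means[2, 2] = 10.0
Xtr = np.vstack([m + rng.normal(size=(n, dim)) for m in means])
ytr = np.repeat(np.arange(3), n)
Xte = np.vstack([m + rng.normal(size=(n, dim)) for m in means])
yte = np.repeat(np.arange(3), n)
mu, W = pca_fit(Xtr, 2)
pred = centroid_classify(transform(Xtr, mu, W), ytr, transform(Xte, mu, W))
accuracy = float((pred == yte).mean())
```

Because the between-class scatter lies in a low-dimensional subspace, PCA down to 2 dimensions loses nothing here, echoing the paper's finding that performance survives severe dimension reduction.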

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号