51.
52.
Film archives continuously need automatic restoration tools to accelerate the correction of film artifacts and to reduce costs. Blotches are a common type of film degradation, and in traditional systems their correction requires extensive manual interaction due to high false-detection rates and the huge data volumes of high-resolution images. Blotch detectors need reliable motion estimation to avoid falsely detecting uncorrupted regions. When detection is erroneous, an operator usually has to remove the false alarms manually, which significantly decreases the efficiency of the restoration process. To reduce manual intervention, we developed a two-step false-alarm reduction technique comprising pixel- and object-based post-processing methods. The proposed pixel-based algorithm compensates motion, decreasing false alarms at low computational cost, while the subsequent object-based method further reduces the residual false alarms with machine learning techniques. We introduce a new quality metric for detection methods that measures the amount of manual work required after automatic detection. In our novel evaluation technique, the ground truth is collected from digitized archive sequences in which defective pixel positions are detected in an interactive process.
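A minimal baseline for the pixel-level stage can be sketched as follows. This is the classic spike-detection index (SDIa), not the paper's pipeline: it omits the motion compensation the paper adds precisely to suppress false alarms, and the frame names and threshold are illustrative assumptions.

```python
import numpy as np

def sdia_detect(prev_frame, cur_frame, next_frame, thresh=25):
    """Flag a pixel as a blotch candidate only if it differs from BOTH
    temporal neighbours by more than `thresh` -- blotches are impulsive
    in time, so real scene content usually matches at least one neighbour.
    Motion compensation (used by the paper to cut false alarms) is
    omitted here for brevity."""
    d_prev = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    d_next = np.abs(cur_frame.astype(np.int32) - next_frame.astype(np.int32))
    return np.minimum(d_prev, d_next) > thresh
```

A moving but uncorrupted edge still matches its motion-compensated neighbour, which is why compensating motion before this test removes many false alarms.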
53.
Sparse on-line Gaussian processes (cited 7 times: 0 self-citations, 7 by others)
We develop an approach for sparse representations of Gaussian process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian on-line algorithm with a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model. By using an appealing parameterization and projection techniques in a reproducing kernel Hilbert space, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for both a propagation of predictions and Bayesian error measures. The significance and robustness of our approach are demonstrated on a variety of experiments.
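The core idea of admitting a data point into a basis set only when it is sufficiently novel can be sketched as follows. This is a heavily simplified illustration, not the paper's algorithm: the novelty score (squared RKHS distance to the span of the basis) matches the spirit of the method, but the exact on-line recursions are replaced by a naive refit, and all parameter values are illustrative.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class SparseOnlineGP:
    """Keeps at most `budget` basis points; a new point is admitted only
    if its novelty (squared RKHS distance to the span of the current
    basis) exceeds `tol` -- the sparsification idea of the paper,
    without its efficient recursive updates."""
    def __init__(self, budget=25, noise=1e-2, tol=0.3, ls=0.5):
        self.budget, self.noise, self.tol, self.ls = budget, noise, tol, ls
        self.X = np.empty((0, 1)); self.y = np.empty(0)

    def _novelty(self, x):
        if len(self.X) == 0:
            return 1.0
        k = rbf(x[None, :], self.X, self.ls)[0]
        K = rbf(self.X, self.X, self.ls) + 1e-10 * np.eye(len(self.X))
        return 1.0 - k @ np.linalg.solve(K, k)

    def update(self, x, y):
        x = np.atleast_1d(x)
        if len(self.X) < self.budget and self._novelty(x) > self.tol:
            self.X = np.vstack([self.X, x[None, :]])
            self.y = np.append(self.y, y)

    def predict(self, Xs):
        K = rbf(self.X, self.X, self.ls) + self.noise * np.eye(len(self.X))
        ks = rbf(np.atleast_2d(Xs), self.X, self.ls)
        return ks @ np.linalg.solve(K, self.y)
```

Feeding in 200 samples of a sine wave retains only a few dozen well-spread basis points, yet predictions between them stay accurate.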
54.
The main aim of this paper is to predict NO and NO2 concentrations four days in advance by comparing two artificial intelligence learning methods, namely multi-layer perceptrons and support vector machines, on two kinds of spatial embedding of the temporal time series. Hourly values of NO and NO2 concentrations, as well as meteorological variables, were recorded at a crossroad monitoring station with heavy traffic in Szeged, in order to build a model for predicting NO and NO2 concentrations several hours in advance. The prediction of NO and NO2 concentrations was performed partly on the basis of their past values, and partly on the basis of temperature, humidity and wind speed data. Since NO can be predicted more accurately, its values were considered primarily when forecasting NO2. Time series prediction can be interpreted in a way that is suitable for artificial intelligence learning. Two effective learning methods, namely multi-layer perceptrons and support vector regression, are used to provide efficient non-linear models for NO and NO2 time series predictions. Multi-layer perceptrons are widely used to predict these time series, but support vector regression has not yet been applied to predicting NO and NO2 concentrations. Three commonly used linear algorithms were considered as references: one-day persistence, the average of several days' persistence, and linear regression. Based on the good results of the multi-day persistence average, a prediction scheme was introduced that forms weighted averages instead of simple ones. The optimization of these weights was performed with linear regression in the linear case and with the learning methods mentioned above in the non-linear case. For the NO predictions, the non-linear learning methods give significantly better predictions than the reference linear methods. In the case of NO2, the improvement is considerable, though less notable than for NO.
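The spatial (time-delay) embedding that feeds both learners can be sketched as follows, using closed-form ridge regression as a stand-in for the paper's linear-regression reference method. The sinusoidal test series, lag count, and regularization value are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np

def embed(series, lags):
    """Time-delay embedding: the row for time t is [x_{t-lags}, ..., x_{t-1}],
    with target x_t -- the 'spatial embedding' of a scalar time series."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    return X, series[lags:]

def fit_ridge(X, y, lam=1e-8):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# A sinusoid obeys x_t = 2*cos(w0)*x_{t-1} - x_{t-2}, so two lags suffice
# for the linear reference model to predict it one step ahead.
series = np.sin(0.3 * np.arange(300))
X, y = embed(series, lags=2)
w = fit_ridge(X, y)
one_step = np.array([series[-2], series[-1]]) @ w  # forecast of x_300
```

The non-linear learners in the paper consume exactly this kind of embedded matrix, optionally extended with meteorological columns.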
55.
Pattern matching is a simple method for studying the properties of information sources based on individual sequences (Wyner et al 1998 IEEE Trans. Inf. Theory 44 2045-56). In particular, the normalized Lempel-Ziv complexity (Lempel and Ziv 1976 IEEE Trans. Inf. Theory 22 75-88), which measures the rate of generation of new patterns along a sequence, is closely related to such important source properties as entropy and information compression ratio. We make use of this concept to characterize the responses of neurons of the primary visual cortex to different kinds of stimulus, including visual stimulation (sinusoidal drifting gratings) and intracellular current injections (sinusoidal and random currents), under two conditions (in vivo and in vitro preparations). Specifically, we digitize the neuronal discharges with several encoding techniques and employ the complexity curves of the resulting discrete signals as fingerprints of the stimulus ensembles. Our results show, for example, that if the neural discharges are encoded with a particular one-parameter method ('interspike time coding'), the normalized complexity remains constant within some classes of stimuli over a wide range of the parameter. Such constant values of the normalized complexity then allow the differentiation of the stimulus classes. With other encodings (e.g. 'bin coding'), the whole complexity curve is needed to achieve this goal. In any case, it turns out that the normalized complexity of the neural discharges in vivo is higher (and hence, in Shannon's sense, carries more information) than in vitro for the same kind of stimulus.
56.
The long-term variability of the fetal heart rate (FHR) provides valuable information on the fetal health status. Routine clinical FHR measurements are usually carried out by means of ultrasound cardiography. Although frequent FHR monitoring is advisable, high-quality ultrasound devices are too expensive to be available for home care use. Passive and fully non-invasive acoustic recording, called phonocardiography, provides an alternative low-cost measurement method. Unfortunately, the acoustic signal recorded on the maternal abdominal surface is heavily loaded with noise, so determining the FHR raises serious signal processing issues. An accurate and robust fetal phonocardiograph has long been a research goal. This paper presents a novel two-channel phonocardiographic device and an advanced signal processing method for determining the FHR. The developed system provided 83% accuracy compared to simultaneously recorded reference ultrasound measurements.
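A common baseline for heart-rate extraction from a phonocardiogram segment can be sketched as follows. This is not the paper's two-channel method; it is the standard autocorrelation approach, and the bpm search window and sampling rate are illustrative assumptions.

```python
import numpy as np

def estimate_fhr(pcg, fs, bpm_range=(110, 180)):
    """Estimate fetal heart rate from a phonocardiogram segment via the
    autocorrelation peak inside a physiologically plausible beat-period
    window. A common PCG baseline, not the paper's algorithm."""
    x = pcg - pcg.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..n-1
    lo = int(fs * 60.0 / bpm_range[1])                  # shortest period
    hi = int(fs * 60.0 / bpm_range[0])                  # longest period
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * fs / lag
```

On a clean synthetic beat train the autocorrelation peak sits at the beat period; the hard part the paper addresses is making such an estimate robust against the heavy abdominal-surface noise.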
57.
This paper presents a new stochastic particle model for efficient and unbiased Monte Carlo rendering of heterogeneous participating media. We randomly add and remove material particles to obtain a density with which free flight sampling and transmittance estimation are simple, while material particle properties are simultaneously modified to maintain the true expectation of the radiance. We show that meeting this requirement may require the introduction of light particles with negative energy and materials with negative extinction, and provide an intuitive interpretation of such phenomena. Unlike previous unbiased methods, the proposed approach does not require a priori knowledge of the maximum medium density, which is typically difficult to obtain for procedural models. However, the method can benefit from approximate knowledge of the density, which can usually be acquired on the fly at little extra cost and can greatly reduce the variance of the proposed estimators. The introduced mechanism can be integrated into participating-media renderers in which transmittance estimation and free flight sampling are building blocks. We demonstrate its application in a multiple scattering particle tracer, in transmittance computation, and in the estimation of the inhomogeneous air-light integral.
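For context, the standard free-flight sampling building block that the paper generalizes can be sketched as follows: classic Woodcock (delta) tracking, which pads the medium with fictitious null particles up to a known majorant extinction. The paper's contribution is precisely removing the requirement that this majorant be known a priori; the sketch below is the conventional method, with a 1-D extinction function for simplicity.

```python
import math
import random

def delta_track(sigma_t, sigma_max, rng):
    """Classic Woodcock (delta) tracking along a ray: sample tentative
    collisions against the homogenized majorant sigma_max, accept a
    collision as real with probability sigma_t(t)/sigma_max, otherwise
    treat it as a null collision and keep flying. Requires
    sigma_t(t) <= sigma_max everywhere -- the assumption the paper
    above relaxes."""
    t = 0.0
    while True:
        t -= math.log(1.0 - rng.random()) / sigma_max  # tentative step
        if rng.random() < sigma_t(t) / sigma_max:
            return t  # real collision distance
        # null collision: continue tracking
```

The returned distances follow the true free-flight distribution exp(-∫σ_t) regardless of how σ_t varies, which is what makes the estimator unbiased.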
58.
A Secure Elliptic Curve-Based RFID Protocol (cited 3 times: 0 self-citations, 3 by others)
Nowadays, the use of Radio Frequency Identification (RFID) systems in industry and stores has increased. Nevertheless, some of these systems present privacy problems that may discourage potential users. Hence, high-confidence and efficient privacy protocols are urgently needed. Previous studies in the literature proposed schemes that are proven to be secure, but they have scalability problems. A feasible and scalable protocol to guarantee privacy is presented in this paper. The proposed protocol uses elliptic curve cryptography combined with a zero-knowledge-based authentication scheme. An analysis proving the system secure, and even forward secure, is also provided. This work is supported by the Generalitat de Catalunya under Grant No. FIC 2007FIC 00880, and the projects of the Spanish MCyT MTM2007-66842-C02-02 and TIN2006-15662-C02-02.
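The structure of a zero-knowledge identification round can be sketched as follows. This is a toy Schnorr-style scheme, not the paper's protocol: for brevity it uses the multiplicative group mod p instead of elliptic-curve point arithmetic (the commit/challenge/response structure is the same), and the parameters are toy-sized and utterly insecure.

```python
import random

# Toy Schnorr-style zero-knowledge identification (NOT secure: toy-sized
# parameters, multiplicative group instead of an elliptic curve).
P, Q, G = 23, 11, 2          # G has order Q in Z_P^*

def keygen(rng):
    x = rng.randrange(1, Q)          # tag's secret key
    return x, pow(G, x, P)           # public key y = g^x mod p

def commit(rng):
    r = rng.randrange(1, Q)          # prover's one-time nonce
    return r, pow(G, r, P)           # commitment t = g^r mod p

def respond(x, r, c):
    return (r + c * x) % Q           # response s = r + c*x mod q

def verify(y, t, c, s):
    # accept iff g^s == t * y^c (mod p); reveals nothing about x
    return pow(G, s, P) == (t * pow(y, c, P)) % P

rng = random.Random(0)
x, y = keygen(rng)                   # tag setup
r, t = commit(rng)                   # tag -> reader: t
c = rng.randrange(1, Q)              # reader -> tag: challenge c
s = respond(x, r, c)                 # tag -> reader: s
```

Correctness follows from g^s = g^(r+cx) = g^r · (g^x)^c = t · y^c; an adversary without x cannot produce a valid s for a fresh challenge.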
59.
This paper deals with the modeling of a robot with a new kinematic design. All joint movements in this robot are transmitted via concentric tubes and bevel gears. D'Alembert's principle is used to derive the equations of motion for rigid and elastic models, considering the elastic deflections of the arms and inner tubes. Implementing the recursive algorithm, both symbolic and numerical methods were tested. The advantages and problems are discussed. In order to verify the model, simulation results are compared with measurements.
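The kind of elastic-transmission effect modeled above can be illustrated with the textbook two-mass elastic-joint model: a motor inertia coupled to a link inertia through a torsional spring. This is a generic stand-in, not the paper's multi-link tube-and-bevel-gear model, and all inertia and stiffness values are illustrative.

```python
import numpy as np

def elastic_joint_dynamics(state, tau, Jm=0.5, Jl=1.0, k=50.0):
    """Two-mass model of a single elastic joint: motor inertia Jm driven
    by torque tau, link inertia Jl, coupled through spring stiffness k.
    state = [theta_m, omega_m, theta_l, omega_l]; returns its time
    derivative from the equations of motion."""
    th_m, w_m, th_l, w_l = state
    spring = k * (th_m - th_l)          # elastic transmission torque
    return np.array([w_m, (tau - spring) / Jm, w_l, spring / Jl])

def rk4_step(f, state, tau, dt):
    """One classical Runge-Kutta 4 integration step."""
    k1 = f(state, tau)
    k2 = f(state + 0.5 * dt * k1, tau)
    k3 = f(state + 0.5 * dt * k2, tau)
    k4 = f(state + dt * k3, tau)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

With zero input torque the total energy (kinetic plus elastic) is conserved, which is a quick sanity check one can also run against a derived multi-body model.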
60.
Rendering inhomogeneous participating media requires many volume samples, since the extinction coefficient needs to be integrated along light paths. Ray marching takes small steps, which is time-consuming and leads to biased algorithms. Woodcock-like approaches use analytic sampling and a random rejection scheme guaranteeing that the expectations will be the same as in the original model. These models and the application of control variates for the extinction have been successful in computing transmittance and single scattering, but were not fully exploited in multiple scattering simulation. Our paper attacks the multiple scattering problem in heterogeneous media and modifies the light–medium interaction model to allow the use of simple analytic formulae while preserving the correct expected values. The model transformation reduces the variance of the estimates with the help of Rao-Blackwellization and control variates applied both to the extinction coefficient and the incident radiance. Based on the transformed model, efficient Monte Carlo rendering algorithms are obtained.
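One of the analytic-sampling transmittance estimators in this family can be sketched as follows: ratio tracking, which walks tentative collisions under a majorant and multiplies in the local null-collision probability instead of rejecting. This is a standard building block of the Woodcock-like approaches discussed above, not the paper's full variance-reduced estimator, and the homogeneous test medium is an illustrative assumption.

```python
import math
import random

def ratio_track_transmittance(sigma_t, sigma_max, distance, rng):
    """Ratio tracking: an unbiased estimator of the transmittance
    exp(-integral of sigma_t) over [0, distance]. At each tentative
    collision under the majorant sigma_max, multiply the running
    estimate by the null-collision probability 1 - sigma_t/sigma_max
    rather than terminating by rejection."""
    t, T = 0.0, 1.0
    while True:
        t -= math.log(1.0 - rng.random()) / sigma_max  # tentative step
        if t >= distance:
            return T                                   # survived the span
        T *= 1.0 - sigma_t(t) / sigma_max              # weight, keep going
```

For a homogeneous medium with σ_t = 0.5 over a distance of 2, the estimator averages to exp(-1); control variates of the kind the paper applies shrink its variance further.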