Showing results 91–100 of 1,209 (search time: 78 ms)
91.
Using MPC to control middle-vessel continuous distillation columns   (Total citations: 1; self-citations: 0; citations by others: 1)
The use of model predictive control (MPC) in middle-vessel continuous distillation columns (MVCCs) is discussed. It is shown that a 5 × 5 MPC implementation (where all levels are included in the MPC as integral process variables) allows the use of a smaller middle vessel, particularly when disturbances can be measured: good performance is ensured without the middle vessel being drained or overfilled. It is also shown that MPC practically circumvents the issue of tuning the middle-vessel level controller. Furthermore, the MVCC design makes conventional decentralised control perform comparably to MPC.
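As an editorial illustration of the predictive level-control idea (not the paper's 5 × 5 MPC), the sketch below uses a one-step-ahead quadratic cost with an analytic minimiser on an assumed integrator model of a single buffer-vessel level, with the inflow treated as a measured disturbance. All names and values are illustrative.

```python
# A minimal sketch of predictive level control for a single buffer vessel,
# NOT the paper's 5x5 MPC: a one-step-ahead controller with an analytic
# solution, assuming an integrator level model and a measured inflow.

def mpc_level_step(level, inflow, setpoint, dt=1.0, r=0.01):
    """One-step-ahead MPC: minimise (predicted level error)^2 + r*u^2.

    Level model: level[k+1] = level[k] + (inflow - u) * dt.
    Setting the derivative of the cost to zero gives the outflow u.
    """
    return dt * (level - setpoint + inflow * dt) / (r + dt * dt)

def simulate(level=0.8, inflow=0.1, setpoint=0.5, steps=50):
    for _ in range(steps):
        u = mpc_level_step(level, inflow, setpoint)
        level += inflow - u  # dt = 1; inflow is a measured disturbance
    return level

final_level = simulate()
```

Because the disturbance (inflow) enters the prediction directly, the controller rejects it in one step rather than after it shows up in the level, which is the intuition behind the abstract's claim that measured disturbances permit a smaller vessel.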
92.
Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information is therefore an important problem in computational neuroscience. We present a general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.
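A hedged 1-D sketch of a Gaussian-approximation point-process filter in the spirit of the recursive decoder described above (random-walk state model plus Poisson-like spiking, one Newton step per bin). The exponential-linear tuning curves, rates, process noise, and sinusoidal trajectory are editorial assumptions, not the paper's experimental setup.

```python
import math, random

# Toy point-process filter: predict with a random-walk state model, then
# update a Gaussian posterior with one Newton step on the spike likelihood.
random.seed(1)
DT = 0.005                        # time bin (s)
A = [2.0, 2.0, 1.0, 1.0]          # log-rate intercepts (assumed)
B = [2.0, -2.0, 3.0, -3.0]        # log-rate slopes (assumed)

def rate(i, x):                   # exponential-linear tuning curve
    return math.exp(A[i] + B[i] * x)

def filter_step(mean, var, spikes, q=0.005):
    var += q                      # predict: random-walk state model
    # update: gradient and Fisher information of the spiking log likelihood
    grad = sum(B[i] * (spikes[i] - rate(i, mean) * DT) for i in range(4))
    info = sum(B[i] ** 2 * rate(i, mean) * DT for i in range(4))
    var = 1.0 / (1.0 / var + info)
    mean += var * grad
    return mean, var

errs, mean, var = [], 0.0, 1.0
for k in range(800):
    x_true = math.sin(2 * math.pi * k * DT / 4.0)   # 4 s period
    spikes = [1 if random.random() < rate(i, x_true) * DT else 0
              for i in range(4)]
    mean, var = filter_step(mean, var, spikes)
    errs.append(abs(mean - x_true))

# instantaneous entropy of the Gaussian posterior, as in the abstract
entropy_bits = 0.5 * math.log2(2 * math.pi * math.e * var)
mae = sum(errs) / len(errs)
```

The same posterior variance that quantifies decoding uncertainty also yields the instantaneous entropy estimate, which is the link the abstract draws between decoding and information measurement.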
93.
With the widespread use of 3D acquisition devices, there is an increasing need for consolidating captured noisy and sparse point cloud data into an accurate representation of the underlying structures. Numerous algorithms tackle this ill‐posed problem by relying on a variety of assumptions such as local smoothness. However, such priors lead to loss of important features and geometric detail. Instead, we propose a novel data‐driven approach for point cloud consolidation via a convolutional neural network based technique. Our method takes a sparse and noisy point cloud as input, and produces a dense point cloud accurately representing the underlying surface by resolving ambiguities in geometry. The resulting point set can then be used to reconstruct accurate manifold surfaces and estimate surface properties. To achieve this, we propose a generative neural network architecture that can take point clouds as both input and output, unlocking a powerful set of tools from the deep learning literature. We use this architecture to apply convolutional neural networks to local patches of geometry for high quality and efficient point cloud consolidation. This results in significantly more accurate surfaces, as we illustrate with a diversity of examples and comparisons to the state‐of‐the‐art.
94.
In this paper we propose and compare different distributed video coding (DVC) schemes based on the use of the wavelet transform, which naturally allows for spatial and other forms of scalability. In particular, we propose a hybrid encoder which utilizes channel codes, and evaluate its performance in the absence of a feedback channel. The proposed scheme uses statistical models for the estimation of the required bitrate at the encoder. We also propose a scheme that is based on a modulo reduction procedure and does not use channel codes at the receiver/transmitter. These schemes are compared with more conventional coders that do not exploit, or only partially exploit, the distributed coding paradigm. Experimental results show that the considered schemes perform well compared with similar asymmetric video compression schemes, and that DVC can be an interesting option in appropriate scenarios.
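The modulo-reduction idea behind the second scheme can be illustrated with a toy integer sketch: the encoder transmits only `x mod M`, and the decoder resolves the resulting ambiguity using its side information `y` (e.g. a prediction from previously decoded frames). Recovery is exact whenever `|x - y| < M/2`; the names and values below are illustrative, not the paper's coder.

```python
# Toy modulo-reduction (Wyner-Ziv-style) sketch: transmit x mod M and let
# the decoder's side information y select the right congruence class.

def encode(x, M):
    return x % M                      # only log2(M) bits are transmitted

def decode(residue, y, M):
    # pick the value congruent to `residue` (mod M) that is closest to y
    return residue + M * round((y - residue) / M)

M = 16
pairs = [(1234, 1230), (77, 80), (-5, -2)]   # side info within M/2 of x
recovered = [decode(encode(x, M), y, M) for x, y in pairs]
```

The design trade-off is the same as in the abstract: a smaller `M` saves rate but tolerates less mismatch between the source and the side information.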
95.
We propose a generic online multi-target track-before-detect (MT-TBD) method that is applicable to confidence maps used as observations. The proposed tracker is based on particle filtering and automatically initializes tracks. The main novelty is the inclusion of the target ID in the particle state, enabling the algorithm to deal with an unknown and potentially large number of targets. To overcome the problem of mixing the IDs of targets close to each other, we propose a probabilistic model of target birth and death based on a Markov Random Field (MRF) applied to the particle IDs. Each particle ID is managed using the information carried by neighboring particles. The assignment of IDs to targets is performed using Mean-Shift clustering and supported by a Gaussian Mixture Model. We also show that the computational complexity of MT-TBD is proportional only to the number of particles. To compare our method with recent state-of-the-art works, we include a postprocessing stage suited for multi-person tracking. We validate the method in real-world and crowded scenarios, and demonstrate its robustness in scenes presenting different perspective views and targets very close to each other.
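As an editorial illustration of the Mean-Shift step that groups particles into target identities, here is a minimal 1-D flat-kernel mean-shift mode finder. The particle positions and bandwidth are illustrative; the actual tracker operates on 2-D confidence maps with per-particle IDs.

```python
# Minimal 1-D flat-kernel mean shift: each point is iteratively shifted to
# the mean of its neighbours within the bandwidth; converged positions that
# coincide are merged into one mode (one target identity).

def mean_shift_modes(points, bandwidth=1.0, iters=20, merge_tol=1e-3):
    modes = []
    for m in points:
        for _ in range(iters):
            neigh = [p for p in points if abs(p - m) <= bandwidth]
            m = sum(neigh) / len(neigh)      # shift to local mean
        if not any(abs(m - q) < merge_tol for q in modes):
            modes.append(m)
    return sorted(modes)

particles = [0.10, 0.20, 0.15, 5.0, 5.1, 4.9]   # two well-separated targets
modes = mean_shift_modes(particles)
```

With targets much closer than the bandwidth the two modes would merge, which is exactly the ID-mixing failure mode the MRF birth/death model in the abstract is designed to resolve.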
96.
In Very Long Baseline Interferometry, signals from distant radio sources are simultaneously recorded at different antennas, with the purpose of investigating their physical properties. The recorded signals are generally modeled as realizations of Gaussian processes, whose power is dominated by the system noise at the receiving antennas. The actual signal coming from the radio source can be detected only after cross-correlation of the various data streams. The signals received at each antenna are digitized after low-noise amplification and frequency down-conversion, in order to allow subsequent digital post-processing. The applied quantization is coarse, with the signal amplitude generally encoded using 1 or 2 bits. In modern applications the sampling is typically performed at a high rate, and subchannels are then generated by filtering, followed by decimation and requantization of the signal streams. The redigitized streams are then cross-correlated to extract the physical observables. While the classical effect of quantization has been widely studied in the past, the decorrelation induced by the filtering and requantization process has so far been characterized only experimentally, mainly because of its inherent mathematical complexity. In the present work we analyze this problem, and provide algorithms and analytical formulas aimed at predicting the induced decorrelation for a wide class of quantization schemes, under the sole assumption of weakly correlated signals, which is typically fulfilled in VLBI and radio astronomy applications.
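The best-known special case of the "classical effect of quantization" mentioned above is the Van Vleck relation: for 1-bit quantization of zero-mean Gaussian signals, the correlation of the quantized streams is r_q = (2/π)·arcsin(r). The simulation below checks this empirically as an editorial illustration; it does not model the paper's filtering/requantization chain.

```python
import math, random

# Empirical check of the Van Vleck relation for 1-bit (sign) quantization
# of weakly-to-moderately correlated zero-mean Gaussian signals.
random.seed(0)
rho, n = 0.5, 100_000
match = 0
for _ in range(n):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    x = g1
    y = rho * g1 + math.sqrt(1 - rho * rho) * g2   # corr(x, y) = rho
    qx = 1 if x >= 0 else -1                        # 1-bit quantizers
    qy = 1 if y >= 0 else -1
    match += qx * qy

rho_q = match / n                                   # measured correlation
predicted = (2 / math.pi) * math.asin(rho)          # Van Vleck prediction
```

For small r the relation linearizes to r_q ≈ (2/π) r, i.e. a fixed sensitivity loss, which is why coarse quantization is acceptable in VLBI where the astronomical correlation is weak.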
97.
98.
99.
We propose two models for improving the performance of rule-based classification in unbalanced and highly imprecise domains. Both models are probabilistic frameworks aimed at boosting the performance of basic rule-based classifiers. The first model implements a global-to-local scheme, where the response of a global rule-based classifier is refined by performing a probabilistic analysis of the coverage of its rules. In particular, the coverage of the individual rules is used to learn local probabilistic models, which ultimately refine the predictions of the corresponding rules of the global classifier. The second model implements a dual local-to-global strategy, in which single classification rules are combined within an exponential probabilistic model in order to boost the overall performance as a side effect of mutual influence. Several variants of the basic ideas are studied, and their performances are thoroughly evaluated and compared with state-of-the-art algorithms on standard benchmark datasets.
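A toy sketch of the global-to-local idea: a global rule's prediction is refined by a local probabilistic model estimated (here with simple Laplace smoothing) from the instances the rule actually covers. The data, the rule, and the class names are illustrative, not the paper's benchmarks or its exact estimator.

```python
# Global-to-local refinement sketch: estimate class probabilities on a
# rule's coverage and let them override the rule's nominal prediction.

def covers(rule, instance):
    return all(instance.get(k) == v for k, v in rule["conditions"].items())

def refine(rule, labeled_data, classes=("pos", "neg"), alpha=1.0):
    covered = [y for x, y in labeled_data if covers(rule, x)]
    # Laplace-smoothed class probabilities on the rule's coverage
    probs = {c: (covered.count(c) + alpha) /
                (len(covered) + alpha * len(classes))
             for c in classes}
    return max(probs, key=probs.get), probs

# A global rule that nominally predicts "pos", but whose coverage is mostly
# "neg" (the unbalanced-domain situation the abstract targets):
rule = {"conditions": {"f1": 1}, "predicts": "pos"}
data = ([({"f1": 1}, "neg")] * 8 + [({"f1": 1}, "pos")] * 2
        + [({"f1": 0}, "pos")] * 5)
label, probs = refine(rule, data)
```

Here the local model flips the rule's imprecise prediction, which is the kind of coverage-based correction the first framework formalizes probabilistically.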
100.
We examine four approaches for dealing with the logical omniscience problem and their potential applicability: the syntactic approach, awareness, algorithmic knowledge, and impossible possible worlds. Although in some settings these approaches are equi-expressive and can capture all epistemic states, in other settings of interest (especially with probability in the picture), we show that they are not equi-expressive. We then consider the pragmatics of dealing with logical omniscience: how to choose an approach and construct an appropriate model.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号