518 query results found (search time: 15 ms)
31.
32.
33.
The dividing-wall column (DWC) is one of the best examples of process intensification, as it can bring significant reductions in capital investment as well as savings in operating costs. Conventional ternary separations progressed from the (in-)direct sequences to thermally coupled columns such as the Petlyuk configuration, and later to the compact DWC design that integrates the two distillation columns into a single shell. Nevertheless, this integration also changes the control and operating mode due to the higher number of degrees of freedom. In this work we explore dynamic optimization and advanced control strategies based on model predictive control (MPC), with or without PID. These structures were enhanced by adding an extra loop that controls the heavy component at the top of the feed side of the column, using the liquid split as the manipulated variable, thus implicitly achieving energy minimization. To allow a fair comparison with previously published references, this work considers as a case study the industrially relevant separation of the benzene–toluene–xylene (BTX) mixture in a DWC. The results show that MPC leads to a significant increase in performance compared to previously reported conventional PID controllers within a multi-loop framework. Moreover, the optimization performed by the MPC efficiently accommodates the goal of minimum energy requirements, made possible by the extra loop, even in a transient state. The practical benefits of coupling MPC with PID controllers are also clearly demonstrated.
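As a hedged illustration of the extra-loop idea, the sketch below runs a discrete PI loop that drives a heavy-component impurity to its setpoint by manipulating the liquid split. This is a toy sketch, not the authors' DWC model: the first-order plant, its gain and time constant, the PI tuning, and all setpoints are invented for demonstration.

```python
# Illustrative only: a PI loop on the liquid split holding the heavy-component
# impurity at the top of the feed side near a setpoint. The plant here is an
# assumed first-order response, not a rigorous DWC model.

def simulate_extra_loop(setpoint=0.01, steps=600, dt=0.1):
    """Discrete PI control of a toy first-order impurity response."""
    kp, ki = 2.0, 2.0            # assumed PI tuning
    gain, tau = -0.05, 5.0       # assumed plant: more liquid split -> less impurity
    impurity = 0.03              # initial heavy-component mole fraction (assumed)
    liquid_split = 0.35          # nominal liquid split fraction (assumed)
    integral = 0.0
    for _ in range(steps):
        error = setpoint - impurity
        integral += error * dt
        # PI law; subtraction because the assumed plant gain is negative
        liquid_split = 0.35 - (kp * error + ki * integral)
        liquid_split = min(max(liquid_split, 0.0), 1.0)  # physical bounds
        # first-order plant update toward the steady state for this split
        target = 0.03 + gain * (liquid_split - 0.35)
        impurity += (target - impurity) / tau * dt
    return impurity, liquid_split
```

With these assumed numbers the loop settles at the setpoint with a liquid split near 0.75; in the paper's framework this loop sits alongside the MPC, which handles the remaining degrees of freedom.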
34.

Position-patch based approaches have been proposed for single-image face hallucination. This paper models face hallucination as a coefficient-recovery problem with respect to an adaptive training set, improving robustness to noise. The image-adaptive training set is constructed by corrupting a local training set of position-patches with amounts of noise that depend on the noise level of the input image. In the proposed method, image denoising and super-resolution are carried out simultaneously to obtain superior results. Although the principle is general and can be extended to most super-resolution algorithms, we discuss it in the context of the existing locality-constrained representation (LcR) approach in order to compare their performance. Experiments demonstrate that the proposed approach yields quantitatively and qualitatively better results in highly noisy environments.
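The noise-adaptive training set can be sketched as follows. This is an assumed illustration, not the paper's exact procedure: the noise estimator, patch representation (flat lists of pixel values), and Gaussian corruption model are all simplifying assumptions.

```python
# Sketch: corrupt a clean position-patch training set with noise matched to
# the estimated noise level of the input image (assumed details throughout).
import random

def estimate_noise_sigma(patch):
    """Crude noise estimate from first differences of pixel values
    (an assumption; the paper's noise-level estimation may differ)."""
    diffs = [b - a for a, b in zip(patch, patch[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return (var / 2) ** 0.5  # differencing doubles the noise variance

def adapt_training_set(clean_patches, input_patch, seed=0):
    """Corrupt each clean position-patch with the input's noise level."""
    rng = random.Random(seed)
    sigma = estimate_noise_sigma(input_patch)
    return [[v + rng.gauss(0.0, sigma) for v in patch]
            for patch in clean_patches]
```

A coefficient-recovery method such as LcR would then reconstruct the noisy input over this corrupted training set, so that denoising and super-resolution happen in one step.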
35.
Multimedia Tools and Applications - Biometric authentication can establish a person's identity from their exclusive features. In general, biometric authentication can be vulnerable to spoofing...
36.
Dynamic time warping (DTW) has proven to be an exceptionally strong distance measure for time series. DTW combined with one-nearest-neighbor, one of the simplest machine learning methods, has been difficult to convincingly outperform on the time series classification task. In this paper, we present a simple technique for time series classification that exploits DTW's strength on this task. Instead of directly using DTW as a distance measure to find nearest neighbors, the technique uses DTW to create new features, which are then given to a standard machine learning method. We show experimentally that our technique improves over one-nearest-neighbor DTW on 31 of 47 UCR time series benchmark datasets. In addition, the method can easily be combined with other methods. In particular, when combined with the symbolic aggregate approximation (SAX) method, it improves over SAX on 37 of 47 UCR datasets. The proposed method thus also provides a mechanism to combine distance-based methods like DTW with feature-based methods like SAX. We further show that combining the proposed classifiers in ensembles improves time series classification performance even more.
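The core idea, using DTW distances as features rather than only for nearest-neighbor lookup, can be sketched as below. The DTW recurrence is the standard textbook one; the choice of reference series and downstream classifier are left open, as the abstract does not fix them.

```python
# Sketch of DTW-distance features: compute DTW distances from a series to a
# set of reference series, then hand the resulting vector to any standard
# classifier (classifier choice is an open assumption here).

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

def dtw_features(series, references):
    """Feature vector: DTW distance from `series` to each reference."""
    return [dtw_distance(series, r) for r in references]
```

Each training or test series is thus mapped into a fixed-dimensional space (one dimension per reference), which is what lets feature-based learners consume a distance measure like DTW.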
37.
This research compares two time-series interferometric synthetic aperture radar (InSAR) methods, persistent scatterer SAR interferometry (PS-InSAR) and the small baseline subset (SBAS), for retrieving the deformation signal from pixels with different scattering characteristics. These approaches are used to estimate surface deformation in the L'Aquila region of Central Italy, where an earthquake of magnitude Mw 6.3 occurred on 6 April 2009. Fourteen Environmental Satellite (ENVISAT) C-band Advanced Synthetic Aperture Radar (ASAR) images covering the pre-seismic, co-seismic, and post-seismic period are used for the study. Both approaches effectively extract measurement pixels and show a similar deformation pattern, in which the north-west and south-east regions with respect to the earthquake epicentre move in opposite directions. The analysis reveals that the PS-InSAR method extracted a larger number of measurement points (21,103 pixels) than the SBAS method (4886 pixels). A comparison of velocity estimates shows that of the 833 pixels common to both methods, about 62% (517 pixels) have a mean velocity difference below 3 mm year⁻¹ and nearly 66% have a difference below 5 mm year⁻¹. It is concluded that the StaMPS-based PS-InSAR method performs better than SBAS in terms of extracting more measurement pixels and estimating the mean line-of-sight (LOS) velocity.
38.
In this paper, we focus on information extraction from optical character recognition (OCR) output. Since OCR output inherently contains many errors, we present robust algorithms for information extraction from OCR lattices rather than merely looking entities up in the top-choice (1-best) OCR output. Specifically, we address the challenge of named entity detection in noisy OCR output and show that searching for named entities in the recognition lattice significantly improves detection accuracy over 1-best search. While lattice-based named entity (NE) detection improves NE recall from OCR output, the approach has two problems: (1) the number of false alarms can be prohibitive for certain applications, and (2) lattice-based search is computationally more expensive than 1-best NE lookup. To mitigate these challenges, we present techniques for reducing false alarms using confidence measures and for reducing the amount of computation involved in the NE search. Furthermore, to demonstrate that our techniques apply across multiple domains and languages, we experiment with optical character recognition systems for videotext in English and scanned handwritten text in Arabic.
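A toy version of lattice search with a confidence cutoff is sketched below. The lattice format (a list of positions, each holding alternative token/confidence pairs) and the product-of-confidences score are assumptions for illustration, not the paper's actual system.

```python
# Toy illustration: search an OCR word lattice for a multi-token named
# entity, keeping only matches whose product of hypothesis confidences
# clears a threshold (the confidence-based false-alarm filter).

def find_entity(lattice, entity, min_conf=0.1):
    """lattice: list of positions; each position is a list of
    (token, confidence) alternatives. Returns (start, score) matches."""
    matches = []
    for start in range(len(lattice) - len(entity) + 1):
        score = 1.0
        for offset, word in enumerate(entity):
            alts = dict(lattice[start + offset])
            if word not in alts:
                score = 0.0
                break
            score *= alts[word]
        if score >= min_conf:
            matches.append((start, score))
    return matches
```

Unlike 1-best lookup, which sees only the top token at each position, this search can recover an entity whose words appear only in lower-ranked alternatives, at the cost of scanning every alternative, which is the recall/computation trade-off the paper addresses.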
39.
In this paper, we present an accurate and general interconnect model for planar transmission-line interconnects with arbitrary boundary conditions. Based on this unified approach, we develop a SPICE-compatible parameter extraction algorithm that can be used in high-performance computer-aided design applications. A range of multilayered interconnect geometries with arbitrary boundaries are analyzed, and different typical configurations of ground placement are considered to verify the applicability of the method. For all such cases, results are compared for admittance, line parameters, and delay, giving physical insight into the effect of boundary conditions on them. Compared with existing industry-standard numerical field solvers, such as HFSS, the proposed model demonstrates more than 10× speedup within 2% accuracy.
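For context on where extracted line parameters end up, here is a generic sketch of the kind of first-order delay figure such extraction flows feed into SPICE-level comparisons. This is the standard Elmore approximation for a distributed RC line, not the paper's model, and the parameter values in the usage are made up.

```python
# Generic Elmore delay estimate for a distributed RC interconnect:
# delay ~ 0.5 * R' * C' * length^2, with R', C' per-unit-length values
# of the kind a parameter-extraction algorithm produces.

def elmore_rc_delay(r_per_m, c_per_m, length_m):
    """Elmore delay estimate (seconds) of a distributed RC line."""
    return 0.5 * r_per_m * c_per_m * length_m ** 2
```

The quadratic dependence on length is why boundary conditions and ground placement, which shift the extracted per-unit-length parameters, matter for delay.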
40.
The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or on a theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compared the effectiveness of a symbolic solver (CVC3), a random solver, two heuristic search solvers, and seven hybrid solvers (i.e., mixes of random, symbolic, and heuristic solvers). We evaluated the solvers on a benchmark generated by concolic execution of 9 subjects. The performance of each solver was measured by its precision: the fraction of constraints for which the solver finds a solution, out of all constraints that some solver can solve. As expected, symbolic solving subsumes the other approaches for the 4 subjects that generate only decidable constraints. For the remaining 5 subjects, which contain undecidable constraints, the hybrid solvers achieved the highest precision. We also observed that the solvers were complementary, which suggests alternating their use across iterations of a concolic execution driver.
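The random-solver component can be sketched as below. The interface (a constraint as a Python predicate over an assignment, plus integer ranges per variable) is an assumption for illustration; the paper's implementation details are not given in the abstract.

```python
# Minimal randomized solver sketch: when symbolic solving fails (e.g. on an
# undecidable constraint), sample random assignments within the variables'
# ranges and return the first one that satisfies the predicate.
import random

def random_solve(constraint, var_ranges, tries=10000, seed=0):
    """constraint: predicate over a dict {name: value}.
    var_ranges: {name: (lo, hi)} integer ranges to sample from."""
    rng = random.Random(seed)
    for _ in range(tries):
        assignment = {name: rng.randint(lo, hi)
                      for name, (lo, hi) in var_ranges.items()}
        if constraint(assignment):
            return assignment
    return None  # no solution found within the budget
```

A hybrid solver in the paper's sense would first hand the constraint to the symbolic solver and fall back to a randomized or heuristic search like this one only when the symbolic attempt fails.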
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号