Full-text access type
Paid full text | 1217 articles |
Free | 72 articles |
Free (domestic) | 2 articles |
Subject category
Electrical engineering | 9 articles |
General | 1 article |
Chemical industry | 316 articles |
Metalworking | 17 articles |
Machinery and instrumentation | 34 articles |
Architecture and building science | 23 articles |
Mining engineering | 2 articles |
Energy and power engineering | 47 articles |
Light industry | 284 articles |
Hydraulic engineering | 9 articles |
Petroleum and natural gas | 13 articles |
Radio electronics | 87 articles |
General industrial technology | 169 articles |
Metallurgical industry | 35 articles |
Nuclear technology | 13 articles |
Automation technology | 232 articles |
Publication year
2024 | 13 articles |
2023 | 17 articles |
2022 | 53 articles |
2021 | 66 articles |
2020 | 50 articles |
2019 | 59 articles |
2018 | 56 articles |
2017 | 50 articles |
2016 | 57 articles |
2015 | 35 articles |
2014 | 69 articles |
2013 | 91 articles |
2012 | 94 articles |
2011 | 116 articles |
2010 | 81 articles |
2009 | 61 articles |
2008 | 57 articles |
2007 | 48 articles |
2006 | 32 articles |
2005 | 38 articles |
2004 | 20 articles |
2003 | 18 articles |
2002 | 20 articles |
2001 | 11 articles |
2000 | 6 articles |
1999 | 10 articles |
1998 | 7 articles |
1997 | 7 articles |
1996 | 5 articles |
1995 | 6 articles |
1994 | 5 articles |
1993 | 5 articles |
1992 | 3 articles |
1991 | 1 article |
1990 | 4 articles |
1989 | 1 article |
1988 | 2 articles |
1987 | 1 article |
1985 | 2 articles |
1983 | 4 articles |
1982 | 1 article |
1981 | 2 articles |
1979 | 3 articles |
1977 | 1 article |
1975 | 1 article |
1973 | 1 article |
1969 | 1 article |
Sort order: 1,291 results found; search took 15 ms
11.
An information fidelity criterion for image quality assessment using natural scene statistics. Total citations: 11 (self-citations: 0, citations by others: 11)
Hamid Rahim Sheikh Alan Conrad Bovik Gustavo de Veciana 《IEEE transactions on image processing》2005,14(12):2117-2128
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at.
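The criterion above is information-theoretic: quality is, roughly, the fraction of reference-image information that survives the distortion channel. As a hedged illustration only, the sketch below uses global image statistics and a fixed visual-noise variance in place of the paper's local Gaussian scale mixture model; the function name and constants are assumptions, not the paper's formulation:

```python
import numpy as np

def simple_info_fidelity(ref, dist, sigma_n2=0.1, eps=1e-10):
    """Toy scalar information-fidelity ratio: information the distorted
    image shares with the reference through a gain-plus-noise channel,
    divided by the information extractable from the reference itself."""
    ref, dist = ref.astype(np.float64), dist.astype(np.float64)
    mu_r, mu_d = ref.mean(), dist.mean()
    var_r = ref.var()
    cov = ((ref - mu_r) * (dist - mu_d)).mean()
    g = cov / (var_r + eps)                 # estimated channel gain
    var_v = max(dist.var() - g * cov, eps)  # additive distortion variance
    num = np.log2(1.0 + g * g * var_r / (var_v + sigma_n2))
    den = np.log2(1.0 + var_r / sigma_n2)
    return num / den

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
noisy = img + 0.5 * rng.standard_normal((64, 64))
assert abs(simple_info_fidelity(img, img.copy()) - 1.0) < 1e-6
assert simple_info_fidelity(img, noisy) < 1.0
```

For identical images the ratio approaches 1, and additive noise lowers it, mirroring the behavior expected of a fidelity measure.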
12.
Gustavo B. Figueiredo, Cesar A. V. Melo, Nelson L. S. da Fonseca 《Photonic Network Communications》2016,32(1):9-27
Ingress nodes in optical burst switching (OBS) networks are responsible for assembling bursts from incoming packets and forwarding these bursts into the OBS network core. Changes in the statistical characteristics of a traffic stream at an ingress switch can affect the capacity of the network to provide quality of service. Therefore, the statistical characteristics of the output flow of an ingress node must be known for appropriate network dimensioning. This paper evaluates the impact of burst assembly mechanisms on the scaling properties of multifractal traffic flows. Results show that the most relevant factor in determining the nature of the output traffic flow is the relationship between the cut-off time scale of the input traffic and the time scale of the assembly threshold. Moreover, a procedure for detecting the cut-off scale of incoming traffic is introduced.
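The interplay between the input traffic's time scales and the assembly time scale can be illustrated with a minimal burst assembler. The hybrid size/timer policy and all parameter names below are assumptions for illustration, not the paper's exact mechanism:

```python
# Hypothetical hybrid size/timer burst assembler at an OBS ingress node.
def assemble_bursts(packets, size_threshold, timeout):
    """packets: list of (arrival_time, size), sorted by time. A burst is
    emitted when accumulated size reaches size_threshold, or when a new
    packet arrives after the oldest queued packet has waited `timeout`
    (the assembly time scale)."""
    bursts, current, current_size, start = [], [], 0, None
    for t, size in packets:
        if current and t - start >= timeout:   # timer expiry: flush queue
            bursts.append(current)
            current, current_size = [], 0
        if not current:
            start = t                          # burst assembly begins
        current.append((t, size))
        current_size += size
        if current_size >= size_threshold:     # size trigger: emit burst
            bursts.append(current)
            current, current_size = [], 0
    if current:                                # flush the leftover burst
        bursts.append(current)
    return bursts

pkts = [(0.0, 400), (0.1, 400), (0.2, 400), (5.0, 100)]
bursts = assemble_bursts(pkts, size_threshold=1000, timeout=1.0)
assert len(bursts) == 2                        # one size-triggered, one flushed
assert sum(s for _, s in bursts[0]) == 1200
```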
13.
Gracieli Posser Guilherme Flach Gustavo Wilke Ricardo Reis 《Analog Integrated Circuits and Signal Processing》2012,73(3):831-840
A two-step transistor sizing optimization method based on geometric programming for delay/area minimization is presented. In the first step, the Elmore delay is minimized using only minimum and maximum transistor size constraints. In the second step, the minimized delay found in the first step is used as a constraint for area minimization. In this way, our method can target delay and area reduction simultaneously. Moreover, by relaxing the minimized delay, one may further reduce area with a small delay penalty. Gate sizing may be accomplished through transistor sizing by tying each transistor inside a cell to the same scale factor. This reduces the solution space, but also improves runtime, as fewer variables are necessary. To analyze this tradeoff between execution time and solution quality, a comparison between gate sizing and transistor sizing is presented. To qualify our approach, the ISCAS'85 benchmark circuits are mapped to a 45 nm technology using a typical standard cell library. Gate sizing and transistor sizing are performed considering delay minimization. Gate sizing is able to reduce delay by 21%, on average, for the same area and power values as the sizing provided by the standard cell library. Transistor sizing then reduces delay by a further 40.4% and power consumption by 2.9%, on average, compared to gate sizing. However, transistor sizing takes about 23 times longer to compute, on average, using twice as many variables as gate sizing. Gate sizing optimizing area is executed considering a delay constraint. Three delay constraints are considered: the minimum delay given by delay optimization, and delays 1% and 5% higher than the minimum. An energy/delay gain (EDG) metric is used to quantify the most efficient tradeoff. Considering the minimum delay, area (power) is reduced by 28.2%, on average. Relaxing delay by just 1%, area (power) is reduced by 41.7% and the EDG metric is 41.7. Area can be reduced by 51%, on average, by relaxing delay by 5%, with an EDG metric of 10.2.
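The two-step flow above can be sketched on a toy three-stage inverter chain: first minimize Elmore delay under size bounds, then minimize area under a (possibly relaxed) delay constraint. The unit resistance/capacitance values and discrete size grid are hypothetical, and an exhaustive sweep stands in for the paper's geometric programming:

```python
import itertools

R_UNIT, C_UNIT, C_LOAD = 1.0, 1.0, 8.0   # hypothetical unit-size R/C and load

def elmore_delay(sizes):
    """Elmore delay of an inverter chain: stage i has drive resistance
    R_UNIT/size_i, self-loading C_UNIT*size_i, and drives the next gate
    (or the fixed output load C_LOAD)."""
    delay = 0.0
    for i, s in enumerate(sizes):
        c_next = C_UNIT * sizes[i + 1] if i + 1 < len(sizes) else C_LOAD
        delay += (R_UNIT / s) * (C_UNIT * s + c_next)
    return delay

candidates = list(itertools.product(range(1, 9), repeat=3))  # size bounds 1..8

# Step 1: minimum delay achievable under the min/max size constraints.
d_min = min(elmore_delay(s) for s in candidates)

# Step 2: minimum area (sum of sizes) under the delay constraint; relaxing
# the constraint by 5% trades a small delay penalty for further area.
area_at_dmin = min(sum(s) for s in candidates if elmore_delay(s) <= d_min)
area_relaxed = min(sum(s) for s in candidates
                   if elmore_delay(s) <= 1.05 * d_min)
assert area_relaxed <= area_at_dmin
```

Relaxing the delay constraint can only enlarge the feasible set, so the relaxed-area solution is never worse, which is exactly the delay/area tradeoff the abstract quantifies with the EDG metric.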
14.
Ricardo Schumacher Eduardo G. Lima Gustavo H. C. Oliveira 《Circuits, Systems, and Signal Processing》2016,35(7):2298-2316
In this paper, a Takenaka–Malmquist–Volterra (TMV) model structure is employed to improve the approximations in the low-pass equivalent behavioral modeling of radio frequency (RF) power amplifiers (PAs). The Takenaka–Malmquist basis generalizes the orthonormal basis functions previously used in this context. In addition, it allows each nonlinearity order in the expanded Volterra model to be parameterized by multiple complex poles (dynamics). The state-space realizations for the TMV models are introduced. The pole sets for the TMV model and also for the previous Laguerre–Volterra (LV) and Kautz–Volterra (KV) models are obtained using a constrained nonlinear optimization approach. Based on experimental data measured on a GaN HEMT class AB RF PA excited by a WCDMA signal, it is observed that the TMV model reduces the normalized mean-square error and the adjacent channel error power ratio for the upper adjacent channel (upper ACEPR) by 1.6 dB when it is compared to the previous LV and KV models under the same computational complexity.
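The orthonormal bases involved can be illustrated with the Laguerre case, the single-real-pole special case of the Takenaka–Malmquist construction used by the earlier LV model. The pole value and signal lengths below are arbitrary choices for the sketch:

```python
import numpy as np

def one_pole(x, a):
    """Filter x through 1/(1 - a z^-1) (single real pole at a)."""
    y = np.empty_like(x)
    acc = 0.0
    for k, v in enumerate(x):
        acc = v + a * acc
        y[k] = acc
    return y

def laguerre_basis(a, n_funcs, n_samples):
    """Impulse responses of discrete-time Laguerre functions with real pole
    a (|a| < 1): L0(z) = sqrt(1-a^2)/(1 - a z^-1), and each subsequent
    function multiplies by the all-pass factor (z^-1 - a)/(1 - a z^-1)."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    basis = [np.sqrt(1 - a * a) * one_pole(impulse, a)]
    for _ in range(n_funcs - 1):
        prev = basis[-1]
        shifted = np.concatenate(([0.0], prev[:-1]))   # z^-1 * prev
        basis.append(one_pole(shifted - a * prev, a))  # all-pass stage
    return np.array(basis)

B = laguerre_basis(0.6, 4, 400)
# Orthonormality: the Gram matrix of the truncated impulse responses
# approaches the identity as the truncation length grows.
assert np.allclose(B @ B.T, np.eye(4), atol=1e-6)
```

Replacing the repeated pole with a different pole per stage gives the Kautz and, in general, Takenaka–Malmquist families the abstract refers to.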
15.
Devis Tuia Jordi Muñoz-Marí Mikhail Kanevski Gustavo Camps-Valls 《Journal of Signal Processing Systems》2011,65(3):301-310
Traditional kernel classifiers assume independence among the classification outputs. As a consequence, each misclassification receives the same weight in the loss function. Moreover, the kernel function only takes into account the similarity between input values and ignores possible relationships between the classes to be predicted. These assumptions do not hold for most real-life problems, and in the particular case of remote sensing data they are not good assumptions either. Segmentation of images acquired by airborne or satellite sensors is a very active field of research in which one tries to classify a pixel into a predefined set of classes of interest (e.g. water, grass, trees, etc.). In this situation, the classes share strong relationships; e.g., a tree is naturally (and spectrally) more similar to grass than to water. In this paper, we propose a first approach to remote sensing image classification using structured output learning. In our approach, the output space structure is encoded using a hierarchical tree, and these relations are added to the model in both the kernel and the loss function. The methodology gives rise to a set of new tools for structured classification and generalizes the traditional non-structured classification methods. Comparison to a standard SVM is done numerically, statistically, and by visual inspection of the obtained classification maps. Good results are obtained in the challenging case of a multispectral image of very high spatial resolution acquired with QuickBird over an urban area.
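The idea of encoding output-space structure as a hierarchical tree can be sketched as a tree-induced misclassification cost, here simply the path length between two classes in the hierarchy. The class hierarchy below is an invented example, not the paper's:

```python
# Hypothetical class hierarchy: root -> {natural -> {water, grass, trees},
#                                        man_made -> {road, roof}}
parent = {"water": "natural", "grass": "natural", "trees": "natural",
          "road": "man_made", "roof": "man_made",
          "natural": "root", "man_made": "root"}

def ancestors(c):
    """Path from class c up to the root, inclusive."""
    path = [c]
    while c in parent:
        c = parent[c]
        path.append(c)
    return path

def tree_loss(a, b):
    """Tree-induced loss: number of edges between classes a and b
    (0 for a correct prediction), so confusing grass with trees is
    penalized less than confusing grass with a roof."""
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)  # lowest common ancestor
    return pa.index(common) + pb.index(common)

assert tree_loss("grass", "grass") == 0
assert tree_loss("grass", "trees") == 2   # siblings under "natural"
assert tree_loss("grass", "road") == 4    # only the root in common
```

Plugging such a cost into the loss (and a related similarity into the kernel) is what distinguishes the structured formulation from a standard SVM, where every error would cost the same.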
16.
Gustavo Castillo Hernandez, Mohammad Yasseri, Sahar Ayachi, Johannes de Boor, Eckhard Müller 《Semiconductors》2019,53(13):1831-1837
Semiconductors - Thermoelectric material development typically aims at maximizing produced electrical power and efficiency of energy conversion, even though sometimes, this means adding expensive...
17.
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need for a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) a complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, deep learning methods that model the appearance of the LV, and efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases; the results show that our approach produces segmentations comparable to these two approaches using only 20 training images, and that increasing the training set to 400 images makes our approach generally more accurate. Finally, we show that efficient search methods reduce the complexity of the method by up to a factor of ten while still producing competitive segmentations.
In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semisupervised learning methods to further reduce the dependence on rich and large training sets, and to design a shape model that is less dependent on the training set.
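Among the "several error measures" mentioned above, overlap metrics such as the Dice coefficient, together with sensitivity and specificity, are standard choices for comparing manual and automatic segmentations. A small sketch on binary masks (the toy masks are illustrative, not from the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def sensitivity_specificity(pred, truth):
    """Pixel-wise sensitivity (true positive rate) and specificity
    (true negative rate) of a predicted mask against ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return tp / truth.sum(), tn / (~truth).sum()

# Toy example: a 16-pixel "LV" ground truth and a 12-pixel prediction
# that lies entirely inside it (an undersegmentation).
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:5] = True
assert abs(dice(pred, truth) - 2 * 12 / (12 + 16)) < 1e-12
sens, spec = sensitivity_specificity(pred, truth)
assert sens == 12 / 16 and spec == 1.0
```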
18.
Algorithm transformation methods to reduce the overhead of software-based fault tolerance techniques
José Rodrigo Azambuja Gustavo Brown Fernanda Lima Kastensmidt Luigi Carro 《Microelectronics Reliability》2014
This paper introduces a framework that tackles the area and energy costs of methodologies such as spatial or temporal redundancy with a different approach: given an algorithm, we find a transformation in which part of the computation involved is turned into memory accesses. The precomputed data stored in memory can then be protected by applying traditional and well-established ECC algorithms, providing fault-tolerant hardware designs. At the same time, the transformation increases the performance of the system by reducing its execution time, which is then used by customized software-based fault tolerance techniques to protect the system without any degradation compared to its original form. Applying this technique to key algorithms in an MP3 player, combined with a fault injection campaign, shows that this approach increases fault tolerance by up to 92%, without any performance degradation.
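The core transformation, replacing computation with accesses to precomputed (and ECC-protectable) memory, can be sketched with a table-driven replacement for a transcendental function. The fixed-point sine and table size here are invented for illustration; the paper applies the idea to key algorithms of an MP3 player:

```python
import math

N = 256  # hypothetical table resolution
# Precomputed once; in hardware this table would sit in ECC-protected memory.
SINE_TABLE = [round(math.sin(2 * math.pi * k / N) * 32767) for k in range(N)]

def sin_computed(k):
    """Original form: compute the fixed-point sine on every call."""
    return round(math.sin(2 * math.pi * (k % N) / N) * 32767)

def sin_lookup(k):
    """Transformed form: one memory access replaces the transcendental."""
    return SINE_TABLE[k % N]

# The transformation preserves the algorithm's results exactly.
assert all(sin_lookup(k) == sin_computed(k) for k in range(1000))
```

The time saved by the lookup is the slack the paper then spends on software-based fault tolerance without net performance loss.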
19.
Developing Smart Grids Based on GPRS and ZigBee Technologies Using Queueing Modeling–Based Optimization Algorithm
Gustavo Batista de Castro Souza, Flávio Henrique Teles Vieira, Cláudio Ribeiro Lima, Getúlio Antero de Deus Júnior, Marcelo Stehling de Castro, Sérgio Granato de Araujo, Thiago Lara Vasques 《ETRI Journal》2016,38(1):41-51
Smart metering systems have become widespread around the world. RF mesh communication systems have contributed to the creation of smarter and more reliable power systems. This paper presents an algorithm for positioning GPRS concentrators to attain delay constraints for a ZigBee‐based mesh network. The proposed algorithm determines the number and placement of concentrators using integer linear programming and a queueing model for the given mesh network. The solutions given by the proposed algorithm are validated by verifying the communication network performance through simulations.
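The dimensioning question the algorithm answers, how many concentrators are needed so queueing delay stays within bound, can be sketched by assuming each concentrator is an independent M/M/1 queue receiving an equal share of the traffic. The rates and delay budget below are hypothetical, and this sketch ignores the placement side, which the paper handles with integer linear programming:

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time of a stable M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def min_concentrators(total_rate, service_rate, max_delay):
    """Smallest number of concentrators, each modeled as an independent
    M/M/1 server taking an equal traffic share, meeting the delay bound."""
    n = 1
    while (total_rate / n >= service_rate                       # stability
           or mm1_delay(total_rate / n, service_rate) > max_delay):
        n += 1
    return n

# Hypothetical numbers: 900 packets/s of metering traffic, 100 packets/s
# service rate per GPRS concentrator, 50 ms delay budget.
n = min_concentrators(900.0, 100.0, 0.05)
assert mm1_delay(900.0 / n, 100.0) <= 0.05
```

With these numbers the bound forces twelve concentrators, since eleven would leave each queue with a mean delay just above the budget.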
20.