Full-text access type
Paid full text | 3253 articles |
Free | 166 articles |
Free (domestic) | 2 articles |
Subject category
Electrical engineering | 23 articles |
General | 4 articles |
Chemical industry | 848 articles |
Metalworking | 28 articles |
Machinery and instrumentation | 55 articles |
Building science | 122 articles |
Mining engineering | 10 articles |
Energy and power engineering | 129 articles |
Light industry | 521 articles |
Hydraulic engineering | 35 articles |
Petroleum and natural gas | 8 articles |
Radio engineering | 275 articles |
General industrial technology | 451 articles |
Metallurgical industry | 79 articles |
Nuclear technology | 34 articles |
Automation technology | 799 articles |
Publication year
2024 | 4 articles |
2023 | 40 articles |
2022 | 127 articles |
2021 | 158 articles |
2020 | 88 articles |
2019 | 106 articles |
2018 | 117 articles |
2017 | 120 articles |
2016 | 151 articles |
2015 | 103 articles |
2014 | 192 articles |
2013 | 274 articles |
2012 | 262 articles |
2011 | 259 articles |
2010 | 199 articles |
2009 | 202 articles |
2008 | 181 articles |
2007 | 165 articles |
2006 | 95 articles |
2005 | 92 articles |
2004 | 71 articles |
2003 | 80 articles |
2002 | 57 articles |
2001 | 27 articles |
2000 | 26 articles |
1999 | 34 articles |
1998 | 30 articles |
1997 | 23 articles |
1996 | 27 articles |
1995 | 21 articles |
1994 | 17 articles |
1993 | 14 articles |
1992 | 11 articles |
1991 | 5 articles |
1990 | 10 articles |
1989 | 10 articles |
1988 | 3 articles |
1987 | 5 articles |
1986 | 3 articles |
1985 | 3 articles |
1983 | 1 article |
1982 | 2 articles |
1981 | 2 articles |
1978 | 4 articles |
Sort order: 3,421 results found (search time: 15 ms)
51.
A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients (Total citations: 27; self-citations: 0; citations by others: 27)
We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.
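The synthesis-by-projection idea in the abstract above can be sketched in miniature. The toy below is our own illustration, not the authors' model: it replaces the full set of joint wavelet statistics with just a mean and a variance constraint, and iteratively projects a noise image onto the set of images satisfying them.

```python
import numpy as np

def project_mean_var(img, target_mean, target_var):
    """Project an image onto the set of images with the given mean and variance."""
    img = img - img.mean()
    img = img * np.sqrt(target_var / max(img.var(), 1e-12))
    return img + target_mean

def synthesize(target_mean, target_var, shape=(64, 64), iters=10, seed=0):
    """Start from white noise and iteratively impose the statistical constraints.
    In the full model there is one projection per statistic group (marginals,
    raw and phase correlations across position, orientation, and scale)."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(shape)
    for _ in range(iters):
        img = project_mean_var(img, target_mean, target_var)
    return img

img = synthesize(target_mean=0.5, target_var=2.0)
```

After the loop, the synthesized image satisfies the imposed constraints exactly (up to floating-point error); the paper applies the same alternating-projection scheme to a much richer constraint set.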
52.
Termeer M, Oliván Bescós J, Breeuwer M, Vilanova A, Gerritsen F, Gröller ME, Nagel E. IEEE Transactions on Visualization and Computer Graphics, 2008, 14(6): 1595-1602
Visually assessing the effect of the coronary artery anatomy on the perfusion of the heart muscle in patients with coronary artery disease remains a challenging task. We explore the feasibility of visualizing this effect on perfusion using a numerical approach. We perform a computational simulation of the way blood is perfused throughout the myocardium purely based on information from a three-dimensional anatomical tomographic scan. The results are subsequently visualized using both three-dimensional visualizations and bull's eye plots, partially inspired by approaches currently common in medical practice. Our approach results in a comprehensive visualization of the coronary anatomy that compares well to visualizations commonly used for other scanning technologies. We demonstrate techniques giving detailed insight into blood supply, coronary territories, and the feeding coronary arteries of a selected region. We demonstrate the advantages of our approach through visualizations that show information which commonly cannot be directly observed in scanning data, such as a separate visualization of the supply from each coronary artery. We thus show that the results of a computational simulation can be effectively visualized and facilitate visually correlating these results to, for example, perfusion data.
53.
Javier Díaz, Eduardo Ros, Rodrigo Agís, Jose Luis Bernier. Computer Vision and Image Understanding, 2008, 112(3): 262-273
Optical-flow computation is a well-known technique and there are important fields in which the application of this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue which has only recently been addressed. Most real-time systems described to date use basic models which limit their applicability to generic tasks, especially when fast motion is present or when subpixel motion resolution is required. Therefore, instead of implementing a complex optical-flow approach, we describe here a very high-frame-rate optical-flow processing system. Recent advances in image sensor technology make it possible nowadays to use high-frame-rate sensors to properly sample fast motion (i.e. as a low-motion scene), which makes a gradient-based approach one of the best options in terms of accuracy and consumption of resources for any real-time implementation. Taking advantage of the regular data flow of this kind of algorithm, our approach implements a novel superpipelined, fully parallelized architecture for optical-flow processing. The system is fully working and is organized into more than 70 pipeline stages, which achieve a data throughput of one pixel per clock cycle. This computing scheme is well suited to FPGA technology and VLSI implementation. The developed customized DSP architecture is capable of processing up to 170 frames per second at a resolution of 800 × 600 pixels. We discuss the advantages of high-frame-rate processing and justify the optical-flow model chosen for the implementation. We analyze this architecture, measure the system resource requirements using FPGA devices and finally evaluate the system’s performance and compare it with other approaches described in the literature.
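As a rough illustration of the gradient-based model family the abstract refers to, here is a minimal single-pixel, Lucas-Kanade-style estimator in NumPy. This is a textbook sketch, not the paper's superpipelined FPGA architecture; the function name and window size are our own choices.

```python
import numpy as np

def lucas_kanade(frame1, frame2, x, y, win=7):
    """Estimate flow (u, v) at one pixel by least-squares solution of the
    gradient constraint Ix*u + Iy*v + It = 0 over a small window."""
    Iy, Ix = np.gradient(frame1.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    It = frame2.astype(float) - frame1.astype(float)
    r = win // 2
    ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
    iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
    it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
    A = np.stack([ix, iy], axis=1)
    # Minimum-norm least-squares solution handles the aperture problem gracefully.
    u, v = np.linalg.lstsq(A, -it, rcond=None)[0]
    return u, v

# Usage: a horizontal ramp translated right by one pixel has flow u = 1.
X, _ = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
u, v = lucas_kanade(X, X - 1.0, 16, 16)
```

The high-frame-rate argument in the abstract is visible even here: the smaller the inter-frame motion, the better the linearized gradient constraint holds.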
54.
Agustin Ramirez-Agundis, Rafael Gadea-Girones, Ricardo Colom-Palero, Javier Diaz-Carmona. Journal of Real-Time Image Processing, 2007, 2(4): 271-280
This paper presents a scheme and its Field Programmable Gate Array (FPGA) implementation for a system based on combining the bi-dimensional discrete wavelet transformation (2D-DWT) and vector quantization (VQ) for image compression. The 2D-DWT works in a non-separable fashion using a parallel filter structure with distributed control to compute two resolution levels. The wavelet coefficients of the higher-frequency sub-bands are vector quantized using a multi-resolution codebook, and those of the lower-frequency sub-band at level two are scalar quantized and entropy encoded. VQ is carried out by self-organizing feature map (SOFM) neural nets working at the recall phase. Codebooks are quickly generated off-line using the same nets functioning at the training phase. The complete system, including the 2D-DWT, the multi-resolution codebook VQ, and the statistical encoder, was implemented on a Xilinx Virtex 4 FPGA and is capable of performing real-time compression of digital video when dealing with grayscale 512 × 512 pixel images. It offers high compression quality (PSNR values around 35 dB) and acceptable compression rate values (0.62 bpp).
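A software sketch of the two building blocks named above, a one-level 2D Haar DWT and nearest-codeword vector quantization, can clarify the data path. This is a hedged toy: the paper's transform is non-separable over two levels, its codebook is SOFM-trained and multi-resolution, and everything runs in FPGA hardware.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0
    LH = (a - b + c - d) / 4.0
    HL = (a + b - c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def vector_quantize(band, codebook, block=2):
    """Map each block x block patch of a sub-band to the index of its nearest
    codeword (exhaustive search here; the paper uses SOFM recall in hardware)."""
    h, w = band.shape
    idx = np.empty((h // block, w // block), dtype=int)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = band[i:i + block, j:j + block].ravel()
            d = np.sum((codebook - patch) ** 2, axis=1)
            idx[i // block, j // block] = int(np.argmin(d))
    return idx

# Usage: a constant image has all its energy in LL; HH quantizes to codeword 0.
LL, LH, HL, HH = haar_dwt2(np.ones((4, 4)))
idx = vector_quantize(HH, np.array([[0.0] * 4, [1.0] * 4]))
```

In the paper's pipeline the quantized indices of the high-frequency bands, together with the entropy-coded low-frequency band, form the compressed stream.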
55.
Detecting and tracking human faces in video sequences is useful in a number of applications such as gesture recognition and human-machine interaction. In this paper, we show that online appearance models (holistic approaches) can be used for simultaneously tracking the head, the lips, the eyebrows, and the eyelids in monocular video sequences. Unlike previous approaches to eyelid tracking, we show that online appearance models can be used for this purpose. Neither color information nor intensity edges are used by our proposed approach. More precisely, we show how classical appearance-based trackers can be upgraded to deal with fast eyelid movements. The proposed eyelid tracking is made robust by avoiding eye feature extraction. Experiments on real videos show the usefulness of the proposed tracking schemes as well as their improvement over our previous approach.
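The "online appearance model" idea can be illustrated with a minimal translation-only tracker that matches a template by sum of squared differences and then updates the template as a running average of observed appearances. This is a generic sketch, not the paper's head/lips/eyebrows/eyelids tracker; the box format and update rate `alpha` are our own assumptions.

```python
import numpy as np

def track_online_appearance(frames, box, alpha=0.1):
    """Track a region across frames: at each frame, pick the +/-1 pixel
    translation whose patch best matches the current template (SSD), then
    blend that patch into the template (the 'online' appearance update)."""
    y, x, h, w = box
    template = frames[0][y:y + h, x:x + w].astype(float)
    path = [(y, x)]
    for frame in frames[1:]:
        best, best_pos = np.inf, (y, x)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                patch = frame[ny:ny + h, nx:nx + w].astype(float)
                ssd = np.sum((patch - template) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (ny, nx)
        y, x = best_pos
        template = (1 - alpha) * template + alpha * frame[y:y + h, x:x + w]
        path.append((y, x))
    return path
```

The running-average update is what lets such trackers adapt to gradual appearance changes; the paper's contribution is extending this scheme to very fast appearance changes such as eye blinks.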
56.
Javier Murillo, Beatriz López, Víctor Muñoz, Dídac Busquets. Computational Intelligence, 2012, 28(1): 24-50
Auctions have been used to deal with resource allocation in multiagent environments, especially in service-oriented electronic markets. In this type of market, resources are perishable and auctions are repeated over time with the same or a very similar set of agents. In this scenario it is advisable to use recurrent auctions: a sequence of auctions of any kind where the result of one auction may influence the following one. Some problems appear in these situations, such as the bidder drop problem, the asymmetric balance of negotiation power, or resource waste, which could cause the market to collapse. Fair mechanisms can be useful to minimize the effects of these problems. With this aim, we have analyzed four previous fair mechanisms under dynamic scenarios and we have proposed a new one that takes into account changes in the supply as well as the presence of alternative marketplaces. We experimentally show how the new mechanism presents a higher average performance under all simulated conditions, resulting in a higher profit for the auctioneer than with the previous ones, and in most cases avoiding the waste of resources.
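One simple way to picture a fairness mechanism in a recurrent auction is a priority bonus accrued by losers, which counteracts the bidder drop problem. This is an illustrative toy of ours, not one of the four mechanisms analyzed in the paper.

```python
def recurrent_auction(bids_per_round, bonus=0.1):
    """Repeated sealed-bid auction with a simple fairness mechanism:
    every losing agent accrues a priority bonus that is added to its
    effective bid in later rounds, so persistent losers are not starved
    out of the market (the 'bidder drop' problem)."""
    priority = {agent: 0.0 for agent in bids_per_round[0]}
    winners = []
    for bids in bids_per_round:
        effective = {a: b + priority[a] for a, b in bids.items()}
        w = max(effective, key=effective.get)
        winners.append(w)
        for a in bids:
            priority[a] = 0.0 if a == w else priority[a] + bonus
    return winners

# Usage: A always outbids B, but B's accrued priority lets it win round 4.
winners = recurrent_auction([{"A": 1.0, "B": 0.8}] * 4)
```

A mechanism like this trades a little short-term auctioneer revenue for keeping low-bidding agents in the market, which is the kind of balance the paper evaluates experimentally.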
57.
Darío Maravall, Javier de Lope, Raúl Domínguez. Neurocomputing, 2012, 75(1): 106-114
In multi-agent systems, the study of language and communication is an active field of research. In this paper we present the application of evolutionary strategies to the self-emergence of a common lexicon in a population of agents. By modeling the vocabulary or lexicon of each agent as an association matrix or look-up table that maps meanings (i.e. the objects encountered by the agents or the states of the environment itself) into symbols or signals, we check whether it is possible for the population to converge in an autonomous, decentralized way to a common lexicon, so that the communication efficiency of the entire population is optimal. We have conducted several experiments aimed at testing whether it is possible to converge with evolutionary strategies to an optimal Saussurean communication system. We have organized our experiments along two main lines: first, we have investigated the effect of the population size on the convergence results; second, and foremost, we have investigated the effect of the lexicon size on the convergence results. To analyze the convergence of the population of agents, we define the population's consensus as the state in which all the agents (i.e. 100% of the population) share the same association matrix or lexicon. As a general conclusion, we have shown that evolutionary strategies are powerful enough optimizers to guarantee convergence to lexicon consensus in a population of autonomous agents.
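The setup described above, lexicons as look-up tables evolved toward communicative consensus, can be sketched with a simple mutate-and-select loop. This is a minimal (1+1)-style toy under our own assumptions, not the authors' experimental code; in particular, the fitness and mutation operators here are deliberately simplistic.

```python
import random

def comm_success(lex1, lex2):
    """Fraction of meanings for which speaker lex1 and hearer lex2 agree:
    the hearer decodes the speaker's signal back to the same meaning.
    (Ambiguous lexicons decode to the first matching meaning.)"""
    hits = 0
    for meaning, signal in enumerate(lex1):
        decoded = lex2.index(signal) if signal in lex2 else -1
        hits += (decoded == meaning)
    return hits / len(lex1)

def evolve_lexicons(n_agents=8, n_meanings=4, generations=200, seed=1):
    """Each agent's lexicon is a list mapping meaning index -> signal.
    Repeatedly mutate one entry of a random agent's lexicon and keep the
    mutant only if it communicates at least as well with the population."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_meanings) for _ in range(n_meanings)]
           for _ in range(n_agents)]
    def fitness(i, lex):
        return sum(comm_success(lex, pop[j]) + comm_success(pop[j], lex)
                   for j in range(n_agents) if j != i)
    for _ in range(generations):
        i = rng.randrange(n_agents)
        mutant = pop[i][:]
        mutant[rng.randrange(n_meanings)] = rng.randrange(n_meanings)
        if fitness(i, mutant) >= fitness(i, pop[i]):
            pop[i] = mutant
    return pop
```

Consensus in the paper's sense is reached when every agent in the returned population holds an identical lexicon; the paper studies how population size and lexicon size affect how reliably that state is reached.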
58.
Javier Galbally, Julian Fierrez, Javier Ortega-Garcia, Réjean Plamondon. Pattern Recognition, 2012, 45(7): 2622-2632
A novel method for the generation of synthetic on-line signatures based on spectral analysis and the Kinematic Theory of rapid human movements was presented in Part I of this series of two papers. In the present paper, the experimental framework used for the validation of the novel approach is described. The validation protocol, which uses different development and test sets in order to achieve unbiased results, includes the comparison of real and synthetic databases in terms of (i) visual appearance, (ii) statistical information, and (iii) performance evaluation of three competitive and totally different verification systems. The experimental results show the high similarity existing between synthetically generated and humanly produced samples, and the great potential of the method for the study of the signature trait.
59.
TangiWheel: A Widget for Manipulating Collections on Tabletop Displays Supporting Hybrid Input Modality
In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with large resemblance levels between touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme, for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interactions.
60.
Verónica Venturini, Javier Carbo, José Manuel Molina. Expert Systems with Applications, 2012, 39(12): 10656-10673
Research on Ambient Intelligence and Ubiquitous Computing using wireless technologies has increased in recent years. In this work, we review several scenarios to define a multi-agent architecture that supports the information needs of these new technologies across heterogeneous domains. Our contribution consists of designing, in a methodological way, a Context-Aware System (involving location services) using agents that can be applied in very different domains. We describe all the steps followed in the design of the agent system, applying a methodology that hybridizes GAIA and AUML. Additionally, we propose a way to compare different agent architectures for Context-Aware Systems using agent interactions. Thus, in this paper, we describe the assignment of weight values to agent interactions in two different MAS architectures for Context-Aware problems, solving different scenarios inspired by FIPA standard negotiation protocols.