81.
Crowded motions refer to multiple objects moving around and interacting, such as crowds of pedestrians. We capture crowded scenes using a depth scanner at video frame rates; our input is thus a set of depth frames that sample the scene over time. Processing such data is challenging, as it is highly unorganized, with large spatio-temporal holes caused by many occlusions. As no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre-given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem, which can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real-world crowded scanned scenes.
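The reduction to 1D curve fitting can be sketched in a few lines: fit a low-order curve to the observed samples of one coordinate of a trajectory and evaluate it inside an occluded gap. This is a minimal illustration only, not the paper's global optimization; the quadratic model, trajectory, and gap below are hypothetical toy choices.

```python
# Minimal sketch: motion completion as 1D curve fitting.
# Fit y ~ a + b*t + c*t^2 to the observed frames of one coordinate
# (frames t = 4..6 are "occluded") and fill the gap from the fit.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(ts, ys):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    A = [[sum(t ** (i + j) for t in ts) for j in range(3)] for i in range(3)]
    b = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(3)]
    return solve3(A, b)

# Hypothetical trajectory y(t) = 0.5*t^2 with frames t = 4..6 occluded.
observed = [(t, 0.5 * t * t) for t in range(10) if t not in (4, 5, 6)]
a, b, c = fit_quadratic([t for t, _ in observed], [y for _, y in observed])

# Complete the missing frames from the fitted curve.
completed = {t: a + b * t + c * t * t for t in (4, 5, 6)}
print(completed[5])  # close to 0.5 * 25 = 12.5
```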
82.
We propose a fast method for 3D shape segmentation and labeling via the Extreme Learning Machine (ELM). Given a set of example shapes with labeled segmentations, we train an ELM classifier and use it to produce an initial segmentation for test shapes. Based on the initial segmentation, we compute the final smooth segmentation through a graph-cut optimization constrained by the super-face boundaries obtained by over-segmentation and the active contours computed from the ELM segmentation. Experimental results show that our method achieves results comparable to the state of the art but reduces the training time by approximately two orders of magnitude, at both the face level and the super-face level, making it scale well to large datasets. Building on this notable improvement, we demonstrate the application of our method to fast online sequential learning for 3D shape segmentation at the face level, as well as real-time sequential learning at the super-face level.
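The speed of ELM training comes from its structure: the hidden layer is random and never trained, and only the output weights are solved in closed form by least squares. A minimal regression-flavored sketch (not the paper's mesh-feature pipeline; sizes, data, and the ridge term are hypothetical toy values):

```python
# Minimal ELM sketch: random fixed hidden layer + closed-form output
# weights via ridge-regularized least squares (no iterative training).
import math, random

random.seed(0)
H_UNITS, RIDGE = 12, 1e-6

# Random, never-trained hidden layer: tanh(w*x + b).
ws = [random.uniform(-4, 4) for _ in range(H_UNITS)]
bs = [random.uniform(-4, 4) for _ in range(H_UNITS)]

def hidden(x):
    return [math.tanh(w * x + b) for w, b in zip(ws, bs)]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# "Train" in one shot: solve (H^T H + ridge*I) beta = H^T y.
xs = [i / 19 for i in range(20)]
ys = [x * x for x in xs]                      # hypothetical toy target
H = [hidden(x) for x in xs]
HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (RIDGE if i == j else 0.0)
        for j in range(H_UNITS)] for i in range(H_UNITS)]
Hty = [sum(H[k][i] * ys[k] for k in range(len(H))) for i in range(H_UNITS)]
beta = solve(HtH, Hty)

def predict(x):
    return sum(b_i * h_i for b_i, h_i in zip(beta, hidden(x)))

mse = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(mse)
```

Because the only "training" is one linear solve, retraining on newly arriving labeled shapes is cheap, which is what makes the online sequential variant in the abstract practical.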
83.
This paper presents a texture synthesis algorithm designed for the tile-less generation of large images of arbitrary size from small sample images. The synthesised texture shows features that are visually similar to the sample over a wide frequency range. The development of the algorithm aimed at high-quality results for a large range of natural textures, incorporation of the original samples in the synthesis product, ease of use, and good texturing speed even with input sample data two orders of magnitude larger than that used by previous techniques. Like other algorithms, we utilise an implicit texture model by copying arbitrarily shaped texture patches from the sample to the destination over a multi-scale image pyramid. Our method combines the quality advantages of several previous techniques. A mixture of exhaustive searching, massively parallel computing and the well-known LBG algorithm ensures a good balance between texturing quality and speed.
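The exhaustive-search ingredient can be illustrated in isolation: scan the sample image for the patch that best matches a target patch under the sum of squared differences (SSD). This is a toy sketch only; the real method copies arbitrarily shaped patches over an image pyramid, and the tiny "images" below are hypothetical.

```python
# Minimal sketch of exhaustive patch search by SSD.

def best_patch(sample, patch):
    """Return (row, col) of the best SSD match of `patch` in `sample`."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = float("inf"), None
    for r in range(len(sample) - ph + 1):
        for c in range(len(sample[0]) - pw + 1):
            ssd = sum((sample[r + i][c + j] - patch[i][j]) ** 2
                      for i in range(ph) for j in range(pw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# A 6x6 sample with a distinctive 2x2 feature planted at (3, 2).
sample = [[0] * 6 for _ in range(6)]
sample[3][2], sample[3][3], sample[4][2], sample[4][3] = 9, 8, 7, 6
target = [[9, 8], [7, 6]]
print(best_patch(sample, target))  # (3, 2)
```

The quadratic cost of this scan is exactly why the paper pairs it with massive parallelism and LBG-style vector quantization of candidate patches.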
84.
85.
Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities, and one wishes to find an image-specific transformation that preserves as much as possible of the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information, and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacian commutativity as a criterion of color mapping quality and minimize it with respect to the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color-deficient viewers.
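The commutativity criterion itself is simple to state in code: measure the Frobenius norm of the commutator L1·C − C·L2 of the two Laplacians. The tiny graph Laplacians below are hypothetical stand-ins for real image Laplacians, used only to show that the criterion vanishes when structure is preserved.

```python
# Minimal sketch of the Laplacian commutativity criterion.
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def commutator_norm(L1, L2, C):
    """Frobenius norm of L1 C - C L2; zero means perfectly shared structure."""
    D = [[x - y for x, y in zip(r1, r2)]
         for r1, r2 in zip(matmul(L1, C), matmul(C, L2))]
    return math.sqrt(sum(v * v for row in D for v in row))

# Path-graph Laplacian on 3 nodes, and a star-graph Laplacian.
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
L2 = [[2, -1, -1], [-1, 1, 0], [-1, 0, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
P = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]  # a permutation "transformation"

print(commutator_norm(L, L, I))   # identical structure: 0.0
print(commutator_norm(L, L2, P))  # mismatched structure: positive
```

In the paper this scalar is minimized over the parameters of the color transformation; here it is only evaluated for fixed toy inputs.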
86.
We present a novel method for enhancing details in a digital photograph, inspired by the principles of art photography. In contrast to previous methods that primarily rely on tone scaling, our technique provides a flexible tone transform model that consists of two operators: shifting and scaling. This model permits shifting of the tonal range in each image region to enable significant detail boosting regardless of the original tone. We optimize these shift and scale factors in our constrained optimization framework to achieve extreme detail enhancement across the image in a piecewise smooth fashion, as in art photography. The experimental results show that the proposed method brings out a significant amount of detail even from an ordinary low-dynamic-range image.
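The two-operator model can be sketched directly: within a region, tones are scaled about the region mean (boosting local detail) and the whole range is shifted (relocating it so the boost does not clip). The region, factors, and clamping below are hypothetical toy choices, not the paper's optimized values.

```python
# Minimal sketch of the shift-and-scale tone transform for one region.

def tone_transform(region, scale, shift):
    """Apply t' = scale * (t - mean) + mean + shift, clamped to [0, 1]."""
    mean = sum(region) / len(region)
    return [min(1.0, max(0.0, scale * (t - mean) + mean + shift)) for t in region]

# A dark region whose details are almost invisible.
dark = [0.10, 0.12, 0.11, 0.13]

# Shifting the range upward makes room for a strong scale without clipping.
boosted = tone_transform(dark, scale=4.0, shift=0.3)

def spread(vals):
    return max(vals) - min(vals)

print(spread(dark), spread(boosted))  # local contrast grows 4x
```

Pure scaling of this region near black would either clip at zero or leave it dark; the shift operator is what allows boosting "regardless of the original tone," as the abstract puts it.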
87.
In this paper we present a novel approach to simulating image formation for a wide range of real-world lenses in the Monte Carlo ray tracing framework. Our approach sidesteps the overhead of tracing rays through a system of lenses and requires no tabulation. To this end, we first improve the precision of polynomial optics to closely match ground-truth ray tracing. Second, we show how the Jacobian of the optical system enables efficient importance sampling, which is crucial for difficult paths such as sampling the aperture, which is hidden behind lenses on both sides. Our results show that this yields converged images significantly faster than previous methods and accurately renders complex lens systems with negligible overhead compared to simple models, e.g., the thin-lens model. We demonstrate the practicality of our method by incorporating it into a bidirectional path tracing framework and show how it can provide the information needed for sophisticated light transport algorithms.
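For context, the simple baseline the paper compares against samples the aperture of a thin lens uniformly. A minimal sketch of that baseline (the paper's actual contribution, importance sampling through a full lens system via its Jacobian, is not reproduced here; the radius and sample count are hypothetical):

```python
# Minimal sketch: uniform aperture sampling for the thin-lens model.
import math, random

random.seed(42)

def sample_aperture(radius):
    """Uniformly sample a point on a circular aperture of given radius.

    The sqrt trick r = R * sqrt(u) compensates for area growing
    quadratically with radius, keeping the density uniform over the disk.
    """
    u, v = random.random(), random.random()
    r, theta = radius * math.sqrt(u), 2 * math.pi * v
    return r * math.cos(theta), r * math.sin(theta)

samples = [sample_aperture(0.5) for _ in range(1000)]
print(max(math.hypot(x, y) for x, y in samples))  # never exceeds 0.5
```

With a real lens stack, uniform aperture sampling wastes most samples on rays that are vignetted by other elements, which is exactly the inefficiency the Jacobian-based importance sampling addresses.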
88.
In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error over a realistic range of sampling frequencies, we show that in other frequency intervals the BCC lattice, and even the traditional Cartesian Cubic (CC) lattice, are expected to minimize the prealiasing. The BCC lattice is superior to the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible explanation of why the sampling efficiency of the FCC lattice is lower than that of the BCC lattice; although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal so that an undue advantage is not given to either lattice.
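The BCC lattice itself is easy to construct: it is a Cartesian grid plus a second grid offset by half a cell, which is what "body-centered" means. A small generator sketch (the grid size is a hypothetical toy value; real volume sampling would scale the cell to the target sampling frequency):

```python
# Minimal sketch: generating Body-Centered Cubic (BCC) sample positions.

def bcc_points(n):
    """BCC lattice sites inside the cube [0, n]^3 with unit cell size."""
    corners = [(i, j, k)
               for i in range(n + 1) for j in range(n + 1) for k in range(n + 1)]
    centers = [(i + 0.5, j + 0.5, k + 0.5)
               for i in range(n) for j in range(n) for k in range(n)]
    return corners + centers

pts = bcc_points(2)
print(len(pts))  # (2+1)^3 corner sites + 2^3 body centers = 27 + 8 = 35
```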
89.
D. Oelke, H. Strobelt, C. Rohrdantz, I. Gurevych, O. Deussen. Computer Graphics Forum, 2014, 33(3): 201-210
We present an analysis and visualization method for computing what distinguishes a given document collection from others. We determine topics that discriminate a subset of collections from the remaining ones by applying probabilistic topic modeling and subsequently approximating the two relevant criteria, distinctiveness and characteristicness, algorithmically through a set of heuristics. Furthermore, we suggest a novel visualization method called DiTop-View, in which topics are represented by glyphs (topic coins) that are arranged on a 2D plane. Topic coins are designed to encode all information necessary for performing comparative analyses, such as the class membership of a topic, its most probable terms, and the discriminative relations. We evaluate our topic analysis using statistical measures and a small user experiment, and present an expert case study with researchers from political science analyzing two real-world datasets.
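One simple stand-in for such a discriminativeness heuristic (not the paper's exact measures) is to score a topic by the Jensen-Shannon divergence between its weight distribution in the two collection subsets: a topic spread identically across both collections scores zero, a topic concentrated in one scores high. The topic distributions below are hypothetical toy data.

```python
# Minimal sketch: scoring topic distinctiveness via Jensen-Shannon divergence.
import math

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric divergence, bounded in [0, 1] with base-2 logarithms."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Topic weight distributions over collection A vs. collection B.
shared_topic = js_divergence([0.3, 0.4, 0.3], [0.3, 0.4, 0.3])
distinct_topic = js_divergence([0.8, 0.1, 0.1], [0.1, 0.1, 0.8])
print(shared_topic, distinct_topic)  # 0.0 vs. a clearly positive score
```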
90.
Usually, visualization is applied to gain insight into data. Yet consuming the data in the form of a visual representation is not always enough; instead, users need to edit the data, preferably through the same means used to visualize it. In this work, we present a semi-automatic approach to the visual editing of graphs. The key idea is to use an interactive EditLens that defines where an edit operation affects an already customized and established graph layout. Locally optimal node positions within the lens, and edge routes to connected nodes, are calculated according to different criteria. This spares the user much manual work, but still provides sufficient freedom to accommodate application-dependent layout constraints. Our approach utilizes the advantages of multi-touch gestures and is also compatible with classic mouse and keyboard interaction. Preliminary user tests have been conducted with researchers from bioinformatics who need to manually maintain a slowly but constantly growing molecular network. As the user feedback indicates, our solution significantly improves on the editing procedure applied so far.