Full-text access type
- Paid full text: 1,278 articles
- Free: 451 articles
- Free (domestic): 14 articles

Subject category
- Electrical engineering: 99 articles
- General: 28 articles
- Chemical industry: 59 articles
- Metalworking: 3 articles
- Machinery and instrumentation: 18 articles
- Building science: 7 articles
- Mining engineering: 12 articles
- Energy and power: 116 articles
- Light industry: 3 articles
- Hydraulic engineering: 5 articles
- Petroleum and natural gas: 8 articles
- Weapons industry: 2 articles
- Radio: 55 articles
- General industrial technology: 13 articles
- Metallurgy: 7 articles
- Atomic energy technology: 1 article
- Automation technology: 1,307 articles

Publication year
2025: 2 | 2024: 9 | 2023: 104 | 2022: 19 | 2021: 9 | 2020: 192 | 2019: 190 | 2018: 145 | 2017: 153 | 2016: 182 | 2015: 158 | 2014: 187 | 2013: 24 | 2012: 27 | 2011: 58 | 2010: 55 | 2009: 48 | 2008: 19 | 2007: 29 | 2006: 23 | 2005: 35 | 2004: 14 | 2003: 18 | 2002: 12 | 2001: 4 | 2000: 10 | 1999: 4 | 1998: 1 | 1996: 4 | 1995: 1 | 1994: 1 | 1993: 2 | 1991: 1 | 1989: 2 | 1988: 1

1,743 results found (search time: 15 ms)
61.
We propose a personality-trait exaggeration system that emphasizes the impression a human face makes in an image, based on multi-level feature learning and exaggeration. These features are called the Personality Trait Model (PTM). The abstract level of the PTM consists of social-psychology traits of face perception, such as amiable, mean, or cute; the concrete level consists of shape and texture features. A training phase learns the multi-level features of faces from different images: a statistical survey is taken to label sample images with people's first impressions, and from images with the same labels we capture not only shape features but also texture features to enhance the exaggeration effect. The texture feature is expressed as a matrix reflecting the depth of facial organs, wrinkles, and so on. In the application phase, original images are exaggerated using the PTM iteratively, and the exaggeration rate of each iteration is constrained to preserve likeness to the original face. Experimental results demonstrate that our system can effectively emphasize the chosen social-psychology traits.
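The iterative, likeness-constrained exaggeration step can be sketched as follows. This is a hypothetical simplification on a plain feature vector, not the paper's PTM implementation; the function name, constants, and toy vectors are all invented:

```python
import numpy as np

def exaggerate(shape, trait_mean, rate=0.15, max_deviation=0.5, iterations=5):
    """Iteratively push a shape-feature vector away from the trait mean,
    capping total deviation from the original to preserve likeness."""
    original = shape.copy()
    current = shape.copy()
    for _ in range(iterations):
        direction = current - trait_mean            # direction of the trait
        current = current + rate * direction        # amplify the trait
        deviation = np.linalg.norm(current - original)
        if deviation > max_deviation:               # likeness constraint
            current = original + (current - original) * (max_deviation / deviation)
    return current

face = np.array([1.0, 0.2, -0.3])      # toy shape-feature vector
mean = np.array([0.8, 0.0, 0.0])       # toy mean of a trait class
result = exaggerate(face, mean)
```

Each iteration amplifies the deviation from the class mean; the clamp plays the role of the per-iteration rate constraint that keeps likeness with the original face.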
62.
Collision‐Aware and Online Compression of Rigid Body Simulations via Integrated Error Minimization

Methods to compress simulation data are invaluable: they facilitate efficient transmission along the visual-effects pipeline, enable fast replay of simulations for visualization, and make storage of scientific data practical. However, current approaches to compressing simulation data require access to the entire dynamic simulation, leading to large memory requirements and additional computational burden. In this paper we compress contact-dominated, rigid body simulations in an online, error-bounded fashion. This has the advantage of requiring access to only a narrow window of simulation data at a time while still achieving good agreement with the original simulation. Our approach is simulator-agnostic, allowing us to compress data from a variety of sources. We demonstrate the efficacy of our algorithm by compressing contact-dominated rigid body simulations from a number of sources, achieving compression rates of up to 360 times over the raw data size.
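The online, error-bounded selection of keyframes can be illustrated in 1-D. Here a simple linear-interpolation error criterion stands in for the paper's integrated error minimization, and all names are hypothetical:

```python
import numpy as np

def compress_online(frames, error_bound=0.05):
    """Error-bounded online keyframe selection: a frame becomes a keyframe only
    when linear interpolation from the previous keyframe can no longer
    reproduce the intermediate frames within error_bound. Only the window of
    frames since the last keyframe needs to be held in memory."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    keyframes = [(0, frames[0])]
    last_key = 0
    for i in range(2, len(frames)):
        ok = True
        for j in range(last_key + 1, i):
            t = (j - last_key) / (i - last_key)
            interp = (1 - t) * frames[last_key] + t * frames[i]
            if np.max(np.abs(interp - frames[j])) > error_bound:
                ok = False
                break
        if not ok:                       # frame i-1 must become a keyframe
            keyframes.append((i - 1, frames[i - 1]))
            last_key = i - 1
    keyframes.append((len(frames) - 1, frames[-1]))
    return keyframes
```

A perfectly linear trajectory collapses to its two endpoints regardless of length, while contact events force extra keyframes where the motion bends.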
63.
Approximate Program Smoothing Using Mean-Variance Statistics, with Application to Procedural Shader Bandlimiting

We introduce a general method to approximate the convolution of a program with a Gaussian kernel, which has the effect of smoothing the program. Our compiler framework models intermediate values in the program as random variables characterized by mean and variance statistics. We decompose the input program into atomic parts and relate the statistics of the different parts of the smoothed program. We give several approximate smoothing rules that can be applied to these parts, including an improved variant of Dorn et al. [DBLW15], a novel adaptive Gaussian approximation, Monte Carlo sampling, and compactly supported kernels. Our adaptive Gaussian approximation handles multivariate Gaussian-distributed inputs, gives exact results for a larger class of programs than previous work, and is accurate to second order in the standard deviation of the kernel for programs with certain analytic properties. Because each expression in the program can have multiple approximation choices, we use a genetic search to automatically select the best approximations. We apply this framework to automatically bandlimiting procedural shader programs, and evaluate the method on a variety of geometries and complex shaders, including shaders with parallax mapping, animation, and spatially varying statistics. The resulting smoothed shader programs outperform previous approaches both numerically and aesthetically.
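To give a flavor of per-operation smoothing rules, here is a minimal sketch of the exact mean/variance rules for an affine map and a square under a Gaussian input. This is not the paper's compiler; the function names are invented, and real programs need the approximate rules discussed above:

```python
def smooth_affine(a, b, mu, var):
    """Mean and variance of a*x + b for x ~ N(mu, var) (exact)."""
    return a * mu + b, a * a * var

def smooth_square(mu, var):
    """Mean and variance of x**2 for x ~ N(mu, var) (exact):
    E[x^2] = mu^2 + var,  Var[x^2] = 2*var^2 + 4*mu^2*var."""
    return mu * mu + var, 2 * var * var + 4 * mu * mu * var

# Composing rules smooths a whole expression, e.g. (2x + 1)**2 for x ~ N(0, 1):
m1, v1 = smooth_affine(2.0, 1.0, 0.0, 1.0)   # 2x + 1 ~ N(1, 4)
m2, v2 = smooth_square(m1, v1)               # mean 5.0, variance 48.0
```

The affine rule is exact because an affine image of a Gaussian is Gaussian; the square rule is exact in its first two moments, though the output is no longer Gaussian, which is where approximation enters for deeper compositions.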
64.
High-dimensional data sets are prevalent in many application domains. Such data is commonly visualized using dimensionality reduction (DR) methods, which provide, for example, a two-dimensional embedding of the abstract data that retains relevant high-dimensional characteristics such as local distances between data points. Since the number of DR algorithms from which users may choose is steadily increasing, assessing their quality becomes more and more important. We present a novel technique, based on persistent homology, to quantify and compare the quality of DR algorithms. An inherent beneficial property of persistent homology is its robustness against noise, which makes it well suited for real-world data. Our pipeline identifies the best DR technique for a given data set and chosen metric (e.g., preservation of local distances) and provides knowledge about the local quality of an embedding, thereby helping users understand the shortcomings of the selected DR method. We demonstrate the utility of our method using application data from multiple domains and a variety of commonly used DR methods.
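A crude sketch of the idea, restricted to 0-dimensional homology (the paper's pipeline is more general): for a Vietoris-Rips filtration, the H0 death times equal the edge lengths of the Euclidean minimum spanning tree, so comparing sorted MST edge lengths before and after embedding yields a persistence-based quality score. All names here are hypothetical:

```python
import numpy as np

def h0_barcode(points):
    """H0 death times of a Vietoris-Rips filtration, which equal the edge
    lengths of the Euclidean minimum spanning tree (Prim's algorithm)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf          # never re-add a tree vertex
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return np.sort(np.array(deaths))

def dr_quality(high_dim, embedding):
    """Lower is better: L-infinity distance between the sorted H0 barcodes
    of the original data and of its low-dimensional embedding."""
    return float(np.max(np.abs(h0_barcode(high_dim) - h0_barcode(embedding))))
```

An embedding that preserves all pairwise local distances scores 0; distortions show up as barcode differences.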
65.
This paper presents a variational algorithm for feature-preserving mesh denoising. At its heart is a novel variational model composed of three components, fidelity, regularization, and fairness, each designed with an intuitive role. In particular, the fidelity is formulated as an L1 data term, which makes the regularization process less dependent on the exact values of outliers and noise. The regularization is formulated as the total absolute edge-lengthed supplementary angle of the dihedral angle, making the model capable of reconstructing meshes with sharp features. In addition, an augmented Lagrangian method is provided to efficiently solve the proposed variational model. Compared to the prior art, the new algorithm has crucial advantages in handling large-scale noise, noise along random directions, and different kinds of noise, including random impulsive noise, even in the presence of sharp features. Both visual and quantitative evaluations demonstrate the superiority of the new algorithm.
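The role of the augmented Lagrangian solver for an L1 fidelity term can be illustrated on a 1-D analogue of the model, min_u lam*||u - f||_1 + 0.5*||D u||^2 with D the forward-difference operator. This ADMM sketch uses invented names and is not the paper's mesh formulation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm (shrinkage), the key L1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_l1(f, lam=0.5, rho=2.0, iters=400):
    """ADMM for min_u lam*||u - f||_1 + 0.5*||D u||^2 via the split v = u - f."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                  # forward differences
    A = D.T @ D + rho * np.eye(n)
    u, v, w = f.copy(), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        u = np.linalg.solve(A, rho * (f + v - w))   # smooth quadratic step
        v = soft_threshold(u - f + w, lam / rho)    # robust L1 step
        w = w + u - f - v                           # dual update
    return u
```

On a flat signal with a single impulsive spike, the L1 data term treats the spike as an outlier and removes almost all of it, which is exactly the robustness property claimed for the mesh setting.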
66.
Linear Discriminative Star Coordinates for Exploring Class and Cluster Separation of High Dimensional Data

Yunhai Wang, Jingting Li, Feiping Nie, Holger Theisel, Minglun Gong, Dirk J. Lehmann. Computer Graphics Forum, 2017, 36(3):401-410
One main task for domain experts analysing nD data is to detect and interpret class/cluster separations and outliers. An important question is which features/dimensions best separate the classes or allow a cluster-based classification of the data. Common approaches rely on projections from nD to 2D, which raises several challenges: the space of projections contains infinitely many candidates, so how do we find the right one? Projections suffer from distortions and misleading effects, so how far can we trust the projected class/cluster separation? And projections involve the complete set of dimensions/features, so how do we identify irrelevant ones? To address these challenges, we introduce a visual-analytics concept for feature selection based on linear discriminative star coordinates (DSC), which generate views that optimally separate clusters, in a linear sense, for both labeled and unlabeled data. This lets the user explore how each dimension contributes to the clustering. To support exploring relations between clusters and data dimensions, we provide a set of cluster-aware interactions for iterating through subspaces of both records and features in a guided manner. We demonstrate our feature-selection approach for optimal cluster/class separation analysis with experiments on real-life high-dimensional benchmark data sets.
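Plain star coordinates, the linear backbone that DSC builds on, can be sketched as follows; the discriminative variant would replace the uniform axis matrix with one chosen, in a linear discriminant sense, to maximize cluster separation. Names are illustrative:

```python
import numpy as np

def star_coordinates(data, axes=None):
    """Star-coordinates projection: each of the d data dimensions gets a 2-D
    axis vector a_i, and a point x maps to sum_i x_i * a_i. By default the
    axes are spread uniformly on the unit circle."""
    n, d = data.shape
    if axes is None:
        angles = 2.0 * np.pi * np.arange(d) / d
        axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # d x 2
    return data @ axes                                              # n x 2
```

With four uniform axes, a point that loads only on the first dimension lands on that axis, while a point with equal loadings on all dimensions cancels to the origin; inspecting the axis vectors shows how each dimension contributes to the layout.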
67.
Given a set of rectangles embedded in the plane, we consider the problem of adjusting the layout to remove all overlap while preserving the orthogonal order of the rectangles. The objective is to minimize the displacement of the rectangles. We call this problem Minimum-Displacement Overlap Removal (mdor). Our interest in this problem is motivated by the application of displaying metadata of archaeological sites. Because most existing overlap-removal algorithms are not designed to minimize displacement while preserving orthogonal order, we present and compare several approaches tailored to our particular use case. We introduce a new overlap-removal heuristic, which we call reArrange. Although conceptually simple, it is very effective at removing overlap while keeping the displacement small. Furthermore, we propose an additional procedure that repairs the orthogonal order after every iteration, with which we extend both our new heuristic and PRISM, a widely used overlap-removal algorithm. We compare the performance of both approaches with and without this order-repair method. The experimental results indicate that reArrange is very effective for heterogeneous input data where the overlap is concentrated in a few dense regions.
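The flavor of iterative overlap removal can be conveyed with a toy pairwise routine (explicitly not reArrange or PRISM; the push rule and all names are invented): each overlapping pair is pushed apart along the axis of least penetration, splitting the displacement evenly, which also preserves the pair's relative order on that axis:

```python
def overlap(r1, r2):
    """Overlap extents (dx, dy) of two axis-aligned rectangles (x, y, w, h);
    a pair intersects only if both extents are positive."""
    dx = min(r1[0] + r1[2], r2[0] + r2[2]) - max(r1[0], r2[0])
    dy = min(r1[1] + r1[3], r2[1] + r2[3]) - max(r1[1], r2[1])
    return (max(dx, 0.0), max(dy, 0.0))

def remove_overlap(rects, iters=100):
    """Toy iterative overlap removal: push each overlapping pair apart along
    the axis of least penetration, splitting the displacement evenly."""
    rects = [list(r) for r in rects]
    for _ in range(iters):
        done = True
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                dx, dy = overlap(rects[i], rects[j])
                if dx > 0 and dy > 0:
                    done = False
                    if dx < dy:                      # separate along x
                        s = 1.0 if rects[i][0] <= rects[j][0] else -1.0
                        rects[i][0] -= s * dx / 2
                        rects[j][0] += s * dx / 2
                    else:                            # separate along y
                        s = 1.0 if rects[i][1] <= rects[j][1] else -1.0
                        rects[i][1] -= s * dy / 2
                        rects[j][1] += s * dy / 2
        if done:
            break
    return [tuple(r) for r in rects]
```

Unlike the methods in the paper, this sketch maintains order only per separated pair and axis; repairing the full orthogonal order after each iteration is the harder part.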
68.
Fused-deposition-modeling 3D printing is becoming increasingly popular due to its low cost and simple operation and maintenance. While it produces rugged prints from a wide range of materials, it suffers from an inherent limitation: it cannot produce overhanging surfaces of non-trivial size. This limitation can be handled by constructing temporary support structures, but that solution incurs additional material cost, longer print times, and often a fair amount of labor to remove them. In this paper we present a new method for partitioning general solid objects into a small number of parts that can be printed with no support. The partitioning is computed by applying a sequence of cutting planes that split the object recursively. Unlike existing algorithms, the planes are not chosen at random; rather, they are derived from shape-analysis routines that identify and resolve various commonly found geometric configurations. In addition, we guide this search by a revised set of conditions that both ensure the object's printability and realistically model the capabilities of the printer at hand. Evaluation of the new method demonstrates its ability to efficiently obtain support-free partitionings that typically contain fewer parts than those of existing methods.
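One printability condition such a search must model is the printer's maximum self-supporting overhang angle. A minimal check, assuming the build direction is +z and a commonly used 45-degree limit (the names and threshold are illustrative), looks like:

```python
import math

def needs_support(normal, max_overhang_deg=45.0):
    """True if a facet with the given outward normal faces downward more
    steeply than the printer's self-supporting overhang angle allows."""
    nz = normal[2]
    if nz >= 0:                          # upward- or side-facing: printable
        return False
    length = math.sqrt(sum(c * c for c in normal))
    angle_from_down = math.degrees(math.acos(-nz / length))
    return angle_from_down < 90.0 - max_overhang_deg
```

A cutting-plane search would aim for parts in which no facet, after each part is reoriented on the build plate, fails this test.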
69.
B. Gobbo, D. Balsamo, M. Mauri, P. Bajardi, A. Panisson, P. Ciuccarelli. Computer Graphics Forum, 2019, 38(3):609-621
In this paper we present Top Tom, a digital platform that provides analytical and visual solutions for exploring a dynamic corpus of user-generated messages and media articles, with the aims of i) distilling the information from thousands of documents into a low-dimensional space of explainable topics, ii) clustering them hierarchically while allowing users to drill down to the details and stories that constitute each topic, and iii) spotting trends and anomalies. Top Tom implements a batch processing pipeline able to run both in near-real time on time-stamped data from streaming sources and, in a cold-start mode, on historical data with a temporal dimension. The resulting output unfolds along three main axes: time, volume, and semantic similarity (i.e., hierarchical topic aggregation). To allow browsing the data at multiple scales and identifying anomalous behaviors, three visual metaphors were adopted from the biological and medical fields: the flow of particles in a coherent stream, tomographic cross-sectioning, and contrast-like analysis of biological tissues. The platform interface is composed of three main visualizations with coherent and smooth navigation interactions: a calendar view, a flow view, and a temporal cut view. The integration of these three visual models with the multiscale analytic pipeline yields a novel system for identifying and exploring topics in unstructured texts. We evaluated the system using a collection of documents about the emerging opioid epidemic in the United States.
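The trend/anomaly-spotting component (iii) can be illustrated with a toy rule on a topic's document-volume time series (a trailing z-score; the real pipeline is more sophisticated, and all names here are invented):

```python
import statistics

def volume_anomalies(counts, window=7, threshold=3.0):
    """Flag time steps whose document count deviates from the trailing-window
    mean by more than `threshold` standard deviations (a toy z-score rule)."""
    flagged = []
    for t in range(window, len(counts)):
        past = counts[t - window:t]
        mu = statistics.mean(past)
        sd = statistics.pstdev(past) or 1.0     # guard a constant window
        if abs(counts[t] - mu) / sd > threshold:
            flagged.append(t)
    return flagged
```

A steady series yields no flags, while a sudden burst of documents on a topic is flagged at the time step where it occurs.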
70.
We present a user-guided, semi-automatic approach to completing large holes in a mesh. The reconstruction of the missing features in such holes is usually ambiguous, so unsupervised methods may produce unsatisfactory results. To overcome this problem, we let the user indicate constraints by providing merely four points per important feature curve on the mesh; our algorithm regards this input as an indication of an important broken feature curve. The completion is formulated as a global energy-minimization problem with user-defined spatial-coherence constraints, which allows the completion to adhere to the existing features. We demonstrate the method on example problems that are not handled satisfactorily by fully automatic methods.
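The completion-as-energy-minimization formulation can be sketched in 1-D: fill the unknown samples of a signal by minimizing a discrete bending energy while holding the known samples and any user-pinned constraint points fixed. The penalty formulation and all names are illustrative, not the paper's mesh solver:

```python
import numpy as np

def complete_with_constraints(values, known_mask, constraints=None):
    """Fill unknown entries of a 1-D signal by minimizing the squared
    second-difference (bending) energy, with known samples and user-pinned
    points enforced through a large quadratic penalty."""
    values = np.asarray(values, dtype=float)
    fixed = np.asarray(known_mask, dtype=bool).copy()
    target = values.copy()
    if constraints:
        for idx, val in constraints.items():    # user-pinned feature points
            fixed[idx] = True
            target[idx] = val
    n = len(values)
    L = np.zeros((n - 2, n))                    # second-difference operator
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    big = 1e8                                   # penalty weight for constraints
    A = L.T @ L + big * np.diag(fixed.astype(float))
    b = big * fixed * target
    return np.linalg.solve(A, b)
```

With only the endpoints known, the minimizer is the straight line between them; pinning an interior point (the analogue of the user-provided feature points) bends the completion through it.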