Full-text access type (number of articles)
Paid full text: 597
Free: 92
Free (domestic): 60
Subject classification (number of articles)
Electrical engineering: 11
General: 81
Chemical industry: 37
Metalworking: 8
Machinery and instruments: 26
Building science: 10
Mining engineering: 6
Energy and power engineering: 4
Light industry: 66
Hydraulic engineering: 3
Petroleum and natural gas: 6
Weapons industry: 5
Radio electronics: 56
General industrial technology: 41
Metallurgical industry: 14
Atomic energy technology: 7
Automation technology: 368
Publication year (number of articles)
2024: 3
2023: 10
2022: 18
2021: 19
2020: 17
2019: 13
2018: 25
2017: 13
2016: 26
2015: 17
2014: 32
2013: 36
2012: 50
2011: 50
2010: 43
2009: 36
2008: 53
2007: 37
2006: 37
2005: 38
2004: 31
2003: 23
2002: 17
2001: 18
2000: 13
1999: 9
1998: 8
1997: 6
1996: 9
1995: 9
1994: 3
1993: 4
1992: 5
1991: 5
1990: 1
1988: 1
1987: 1
1986: 2
1984: 1
1982: 2
1981: 3
1978: 4
1975: 1
749 results found (search time: 15 ms)
61.
A general scanline polygon fill algorithm (total citations: 4; self-citations: 2; citations by others: 4)
甘泉 《计算机工程与应用》2000,36(2):57-59
The traditional scanline polygon fill algorithm applies only to row-by-row filling with horizontal scanlines. This paper proposes a general scanline polygon fill algorithm that effectively fills a polygon with scanlines of arbitrary spacing and arbitrary inclination. The algorithm relies on key techniques such as coordinate transformation and a floating-point rounding strategy; the vertex scanline number is its core concept.
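A minimal Python sketch of the coordinate-transform idea summarized above: the polygon is rotated so that scanlines of arbitrary inclination become horizontal, spans are computed on scanlines separated by an arbitrary gap, and the resulting segments are rotated back. The function names, the even-odd span pairing, and the placement of scanlines at integer multiples of the spacing are illustrative assumptions; the paper's vertex-scanline-number bookkeeping and rounding strategy are not reproduced here.

```python
import math

def rotate(points, angle):
    """Rotate 2D points about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def generalized_scanline_fill(polygon, angle, spacing):
    """Return fill segments for scanlines of inclination `angle` and gap `spacing`."""
    pts = rotate(polygon, -angle)                      # make the scanlines horizontal
    ys = [y for _, y in pts]
    y = math.ceil(min(ys) / spacing) * spacing         # first scanline inside the polygon
    segments = []
    while y <= max(ys):
        xs = []
        for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
            if (y0 <= y < y1) or (y1 <= y < y0):       # edge crosses this scanline
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for a, b in zip(xs[0::2], xs[1::2]):           # even-odd rule: fill between pairs
            segments.append(rotate([(a, y), (b, y)], angle))   # rotate the span back
        y += spacing
    return segments
```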
62.
To address the low quality of mesh models obtained directly from reconstruction and stored in the STL file format, a global mesh optimization algorithm based on Laplacian coordinates is proposed. The algorithm improves triangle quality while preserving the local geometric features of the original mesh. Its core idea is to reposition the mesh vertices by solving, in the least-squares sense, a weight-controlled linear system that combines constraints on both vertex positions and Laplacian coordinates. Experimental results show that the algorithm has an advantage over previous Laplacian optimization algorithms in preserving mesh detail features.
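The sketch below illustrates the kind of doubly constrained least-squares system described in the abstract, using a uniform (umbrella) Laplacian and SciPy's sparse solver. The weighting scheme, the Laplacian discretization, and the function name are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_optimize(verts, faces, w_pos=0.5):
    """Reposition vertices of a triangle mesh (verts: (n,3) array, faces: (m,3) indices)."""
    verts = np.asarray(verts, dtype=float)
    n = len(verts)
    # Binary adjacency matrix from face connectivity (assumes every vertex is used).
    rows, cols = [], []
    for a, b, c in faces:
        rows += [a, b, b, c, c, a]
        cols += [b, a, c, b, a, c]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                                     # collapse duplicate edge entries
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.eye(n) - sp.diags(1.0 / deg) @ A             # uniform Laplacian, L = I - D^-1 A
    delta = L @ verts                                   # Laplacian coordinates of the input
    # Stack the Laplacian constraint and the weighted positional constraint, then
    # solve the over-determined system per coordinate in the least-squares sense.
    M = sp.vstack([L, w_pos * sp.eye(n)]).tocsr()
    out = np.empty_like(verts)
    for k in range(3):
        rhs = np.concatenate([delta[:, k], w_pos * verts[:, k]])
        out[:, k] = spla.lsqr(M, rhs)[0]
    return out
```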
63.
64.
65.
Singer Philipp; Boison Detlev; Möhler Hanns; Feldon Joram; Yee Benjamin K. 《Canadian Metallurgical Quarterly》2007,121(5):815
Selective deletion of glycine transporter 1 (GlyT1) in forebrain neurons enhances N-methyl-D-aspartate receptor (NMDAR)-dependent neurotransmission and facilitates associative learning. These effects are attributable to increases in extracellular glycine availability in forebrain neurons due to reduced glycine re-uptake. Using a forebrain- and neuron-specific GlyT1-knockout mouse line (CamKIIαCre; GlyT1tm1.2fl/fI), the authors investigated whether this molecular intervention can affect recognition memory. In a spontaneous object recognition memory test, enhanced preference for a novel object was demonstrated in mutant mice relative to littermate control subjects at a retention interval of 2 hr, but not at 2 min. Furthermore, mutants were responsive to a switch in the relative spatial positions of objects, whereas control subjects were not. These potential procognitive effects were demonstrated against a lack of difference in contextual novelty detection: Mutant and control subjects showed equivalent preference for a novel over a familiar context. Results therefore extend the possible range of potential promnesic effects of specific forebrain neuronal GlyT1 deletion from associative learning to recognition memory and further support the possibility that mnemonic functions can be enhanced by reducing GlyT1 function. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
66.
We consider the problem of finding a cutset in a directed graph G=(V,E), i.e., a set of vertices that cuts all cycles in G. Finding a cutset of minimum cardinality is NP-hard. Several approximate and exact algorithms exist, most of them using graph reduction techniques. In this paper, we propose a constraint programming approach to cutset problems and design a global constraint for computing cutsets. This cutset constraint is a global constraint over boolean variables associated with the vertices of a given graph; it states that the subgraph restricted to the vertices whose boolean variable is set to true is acyclic. We propose a filtering algorithm based on graph contraction operations and inference of simple boolean constraints that runs in linear time, O(|E|+|V|). We discuss search heuristics based on graph properties provided by the cutset constraint, and show the efficiency of the cutset constraint on benchmarks from the literature for pure minimum cutset problems, and on an application to log-based reconciliation problems where the global cutset constraint is mixed with other boolean constraints.
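As a simplified illustration of what the cutset constraint enforces (this is not the paper's contraction-based filtering algorithm), the sketch below checks in O(|V|+|E|) time, via Kahn's topological sort, that the subgraph induced by the vertices whose boolean variable is true is acyclic.

```python
from collections import deque

def induced_subgraph_is_acyclic(vertices, edges, keep):
    """keep[v] is True when v stays in the graph, i.e. v is not in the cutset."""
    kept = {v for v in vertices if keep[v]}
    succ = {v: [] for v in kept}
    indeg = {v: 0 for v in kept}
    for u, v in edges:                      # restrict the digraph to the kept vertices
        if u in kept and v in kept:
            succ[u].append(v)
            indeg[v] += 1
    queue = deque(v for v in kept if indeg[v] == 0)
    seen = 0
    while queue:                            # Kahn's algorithm: repeatedly peel off sources
        u = queue.popleft()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(kept)                # every vertex sorted -> no directed cycle
```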
67.
Analysis of a degree-1 vertex kernelization algorithm for vertex cover on random graphs (total citations: 1; self-citations: 0; citations by others: 1)
Random graphs are introduced into the field of parameterized computation. Using statistical and probability-distribution properties of random graphs, the evolution of the problem kernel and of the degree distribution during degree-1 kernelization of the parameterized vertex cover problem is studied from a global perspective. Two important corollaries are obtained: one relating the strength of degree-1 kernelization on random graphs to the average vertex degree, and one relating decisions for the vertex cover problem on random graphs to the degree distribution. Finally, degree-1 kernelization experiments and analyses are carried out on data extracted from MIPS and BIND. Preliminary results show that this analytical approach to the vertex cover problem on random graphs is not only of theoretical interest, but also captures the problem to varying extents depending on how random the instance is.
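For reference, the degree-1 kernelization rule analysed above is the standard reduction for vertex cover: a degree-1 vertex forces its unique neighbour into the cover, after which both can be deleted. The sketch below applies the rule exhaustively and returns the partial cover together with the remaining kernel; the function name and data layout are our own, and self-loops are assumed absent.

```python
def degree_one_kernelize(adj):
    """adj: dict mapping each vertex to the set of its neighbours (undirected, simple)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}     # work on a copy
    cover = set()
    pending = [v for v, nbrs in adj.items() if len(nbrs) == 1]
    while pending:
        u = pending.pop()
        if u not in adj or len(adj[u]) != 1:            # rule no longer applies to u
            continue
        (v,) = adj[u]                                   # u's unique neighbour
        cover.add(v)
        for w in adj.pop(v):                            # delete v and its incident edges
            adj[w].discard(v)
            if len(adj[w]) == 1:                        # w may now trigger the rule
                pending.append(w)
        adj.pop(u, None)                                # u is now isolated; delete it
    return cover, adj                                   # partial cover and the kernel
```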
68.
We present methods to generate rendering sequences for triangle meshes which preserve mesh locality as much as possible. This is useful for maximizing vertex reuse when rendering the mesh using a FIFO vertex buffer, such as those available in modern 3D graphics hardware. The sequences are universal in the sense that they perform well for all sizes of vertex buffers, and generalize to progressive meshes. This has been verified experimentally.
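A common way to evaluate such rendering sequences is to replay them through a simulated FIFO vertex cache and count misses (misses per triangle is the usual figure of merit). The sketch below is an illustrative simulator of this measurement, not the authors' benchmark code; the function name and miss-counting convention are assumptions.

```python
from collections import deque

def fifo_cache_misses(triangles, cache_size):
    """Count vertex cache misses for a triangle sequence rendered with a FIFO buffer."""
    fifo = deque()
    in_cache = set()
    misses = 0
    for tri in triangles:                    # tri is an (i, j, k) tuple of vertex indices
        for v in tri:
            if v not in in_cache:
                misses += 1
                if len(fifo) == cache_size:  # evict the oldest cached vertex
                    in_cache.discard(fifo.popleft())
                fifo.append(v)
                in_cache.add(v)
    return misses

# Example usage: average cache miss ratio for a 16-entry FIFO buffer.
# acmr = fifo_cache_misses(tris, 16) / len(tris)
```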
69.
In this paper we aim to characterize graphs in terms of a structural measure of complexity. Our idea is to decompose a graph into layered substructures of increasing size, and then to measure the information content of these substructures. To locate dominant substructures within a graph, we commence by identifying a centroid vertex, the vertex with minimum shortest path length variance to the remaining vertices. For each graph, a family of centroid expansion subgraphs is derived from the centroid vertex in order to capture dominant structural characteristics of the graph. Since the centroid vertex is identified through a global analysis of the shortest path length distribution, the expansion subgraphs provide a fine representation of the graph structure. We then show how to characterize graphs using depth-based complexity traces, exploring two strategies. The first measures how the entropies of the centroid expansion subgraphs vary with the increasing size of the subgraphs; the second measures how the entropy differences vary with the increasing size of the subgraphs. We perform graph classification in the principal component space of the complexity trace vectors. Experiments on graph datasets abstracted from bioinformatics and computer vision databases demonstrate the effectiveness and efficiency of the proposed graph complexity traces. Our methods are competitive with state-of-the-art methods.
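The sketch below, which assumes a connected undirected graph and uses networkx, illustrates only the first two steps described above: locating the centroid vertex by minimum shortest-path-length variance and deriving the family of centroid expansion subgraphs. The entropy-based complexity traces themselves are not reproduced, and the function names are illustrative.

```python
import networkx as nx
import numpy as np

def centroid_vertex(G):
    """Vertex whose shortest path lengths to the remaining vertices have minimum variance."""
    best, best_var = None, float("inf")
    for v in G.nodes:
        dists = dict(nx.single_source_shortest_path_length(G, v))
        var = np.var([d for u, d in dists.items() if u != v])
        if var < best_var:
            best, best_var = v, var
    return best

def centroid_expansion_subgraphs(G):
    """The K-th subgraph contains every vertex within distance K of the centroid vertex."""
    c = centroid_vertex(G)
    dists = dict(nx.single_source_shortest_path_length(G, c))
    radius = max(dists.values())
    return [G.subgraph([u for u, d in dists.items() if d <= k]).copy()
            for k in range(1, radius + 1)]
```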
70.
Given a set of scattered points, a new object reconstruction algorithm is proposed that inserts new control points into an initial triangle mesh and then subdivides the mesh. Specifically, a suitable data structure is first built from the correspondence between points and faces to generate the initial triangle mesh; new insertion points are then generated on this mesh with the help of Bezier surfaces and the mesh is subdivided; the subdivided triangle mesh is then optimized by an edge-subdivision method; finally, lighting and material settings are applied to the resulting mesh to reconstruct the object. Experiments show that this method preserves object detail better and is feasible and effective.
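The refinement step can be pictured with the simple 1-to-4 subdivision sketch below, in which plain edge midpoints stand in for the paper's Bezier-surface-based insertion points; the data layout and function name are illustrative assumptions.

```python
def subdivide(verts, faces):
    """1-to-4 triangle subdivision. verts: list of (x, y, z); faces: list of (i, j, k)."""
    verts = list(verts)
    edge_vertex = {}                          # (min index, max index) -> new vertex index

    def point_on_edge(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_vertex:            # create each edge point exactly once
            a, b = verts[i], verts[j]
            verts.append(tuple((ai + bi) / 2.0 for ai, bi in zip(a, b)))
            edge_vertex[key] = len(verts) - 1
        return edge_vertex[key]

    new_faces = []
    for i, j, k in faces:
        ij, jk, ki = point_on_edge(i, j), point_on_edge(j, k), point_on_edge(k, i)
        new_faces += [(i, ij, ki), (ij, j, jk), (ki, jk, k), (ij, jk, ki)]
    return verts, new_faces
```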