A total of 5,810 results were found for this query (search time: 15 ms).
141.
Summary  The amount of nondeterminism in a nondeterministic finite automaton (NFA) is measured by counting the minimal number of guessing points a string w has to pass through on its way to an accepting state. NFAs with more nondeterminism can achieve greater savings in the number of states over their deterministic counterparts than NFAs with less nondeterminism. On the other hand, for some nontrivial infinite regular languages a deterministic finite automaton (DFA) can already be quite succinct, in the sense that NFAs need as many states (and even context-free grammars need as many nonterminals) as the minimal DFA has states. This research was supported in part by the National Science Foundation under Grant No. MCS 76-10076.
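A minimal sketch of how such a guessing-point count could be computed: the breadth-first formulation below, the example NFA, and all names are ours, not the paper's. It tracks, for every state reachable after each prefix of w, the fewest states passed through at which the automaton had more than one successor on the current symbol.

```python
def min_guessing_points(delta, start, accept, w):
    """Fewest 'guessing points' over all accepting runs of w: states at
    which the automaton has more than one successor on the next symbol.
    delta maps (state, symbol) -> set of next states."""
    frontier = {start: 0}          # reachable state -> fewest guesses so far
    for sym in w:
        nxt = {}
        for q, guesses in frontier.items():
            targets = delta.get((q, sym), set())
            cost = guesses + (1 if len(targets) > 1 else 0)
            for r in targets:
                if r not in nxt or cost < nxt[r]:
                    nxt[r] = cost
        frontier = nxt
    accepted = [g for q, g in frontier.items() if q in accept]
    return min(accepted) if accepted else None

# a small NFA accepting strings over {a, b} that end in "ab"
delta = {('q0', 'a'): {'q0', 'q1'}, ('q0', 'b'): {'q0'}, ('q1', 'b'): {'q2'}}
```

On this NFA the only guessing point is state q0 reading an 'a', so the count for an accepted string equals the number of 'a's read from q0 along its cheapest accepting run.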
142.
Three results are established. The first is that every nondeterministic strict interpretation of a deterministic pushdown acceptor (dpda) has an equivalent deterministic strict interpretation. The second is that if M₁ and M₂ are two compatible strict interpretations of the dpda M, then there exist deterministic strict interpretations M′ and M″ such that L(M′) = L(M₁) ∪ L(M₂) and L(M″) = L(M₁) ∩ L(M₂). The third states that there is no dpda whose strict interpretations yield all the deterministic context-free languages. This author was supported in part by the National Science Foundation under Grant MCS-77-22323.
143.
Ray Radebaugh, Journal of Low Temperature Physics, 1977, 27(1–2): 91–105
The effects of transverse magnetic fields up to 955 kA/m (12 kOe) on the electrical and thermal conductivities of single-crystal beryllium have been measured between 2 and 300 K. Most of the measurements were made on a sample with a resistance ratio of 1340. This sample was pure enough that the intrinsic electronic thermal resistivity could be measured for the first time; it was found to have the usual T² behavior. The current and heat flow were along the hexagonal c axis of the crystal, while the thermal and electrical conductivities were studied as a function of the angle of the magnetic field in the basal plane. Below about 50 K the thermal conductivity could be reduced by several orders of magnitude by applying the magnetic field. The lattice conductivity, extrapolated from the measurements in the magnetic field, is given by κ = βT², where β = 1.6×10⁻⁴ W/cm·K³. This value is in reasonable agreement with that obtained from measurements of beryllium alloys. The use of single-crystal beryllium as a heat switch for temperatures below about 30 K is discussed. This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by ONR under Government Order Number NAonr-1-75.
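For reference, the reported lattice term is straightforward to evaluate; the helper below simply applies κ = βT² with the paper's β (the function name and interface are ours):

```python
def lattice_conductivity(T, beta=1.6e-4):
    """Lattice thermal conductivity kappa = beta * T**2 in W/(cm K),
    using the reported beta = 1.6e-4 W/(cm K^3) for single-crystal Be."""
    return beta * T ** 2
```

At 10 K this gives a lattice contribution of 1.6×10⁻² W/cm·K, the residual conduction floor relevant to the heat-switch application.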
144.
Given an undirected graph G with edge costs and a specified set of terminals, let the density of any subgraph be the ratio of its cost to the number of terminals it contains. If G is 2-connected, does it contain smaller 2-connected subgraphs of density comparable to that of G? We answer this question in the affirmative by giving an algorithm to prune G and find such subgraphs of any desired size, incurring only a logarithmic-factor increase in density (plus a small additive term). We apply our pruning techniques to give algorithms for two NP-hard problems on finding large 2-vertex-connected subgraphs of low cost; no previous approximation algorithm was known for either problem. In the k-2VC problem, we are given an undirected graph G with edge costs and an integer k; the goal is to find a minimum-cost 2-vertex-connected subgraph of G containing at least k vertices. In the Budget-2VC problem, we are given a graph G with edge costs and a budget B; the goal is to find a 2-vertex-connected subgraph H of G with total edge cost at most B that maximizes the number of vertices in H. We describe an O(log n log k) approximation for the k-2VC problem, and a bicriteria approximation for the Budget-2VC problem that gives an O((1/ε) log² n) approximation while violating the budget by a factor of at most 2 + ε.
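The density measure driving the pruning can be stated in a few lines; the sketch below (names and the edge-list representation are illustrative, not from the paper) computes it for a candidate subgraph:

```python
def density(subgraph_edges, nodes, terminals):
    """Density of a subgraph: total edge cost divided by the number of
    terminals it contains. Edges are (u, v, cost) triples."""
    cost = sum(c for _, _, c in subgraph_edges)
    k = len(set(nodes) & set(terminals))
    if k == 0:
        raise ValueError("subgraph contains no terminals")
    return cost / k

# a tiny path subgraph 1-2-3 with terminals {1, 3}: cost 8 over 2 terminals
example = density([(1, 2, 3.0), (2, 3, 5.0)], {1, 2, 3}, {1, 3})
```

The pruning question then asks whether a 2-connected subgraph of the desired size exists whose density is within a logarithmic factor of this ratio for G itself.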
145.
Ray Pastore, Computers &amp; Education, 2012, 58(1): 641–651
Can increasing the speed of audio narration in multimedia instruction decrease training time while still maintaining learning? The purpose of this study was to examine the effects of time-compressed instruction and redundancy on learning and on learners' perceptions of cognitive load. 154 university students were placed into conditions that crossed time compression (0%, 25%, or 50%) with redundancy (redundant text and narration, or narration only). Participants were presented with multimedia instruction on the human heart and its parts, then given factual and problem-solving knowledge tests, a cognitive load measure, and a review-behavior (back and replay buttons) measure. Results indicated that participants presented with 0% and 25% compression obtained similar scores on both the factual and problem-solving measures, and reported similar levels of cognitive load. Participants presented with redundant instruction did not perform as well as participants presented with non-redundant instruction.
146.
Facility location decisions are usually driven by cost- and coverage-related factors, although empirical studies show that factors such as infrastructure, labor conditions, and competition also play an important role in practice. The objective of this paper is to develop a multi-objective facility location model accounting for a wide range of factors affecting decision-making. The proposed model selects potential facilities from a set of pre-defined alternative locations according to the number of customers, the number of competitors, and real-estate cost criteria. This, however, requires a large amount of both spatial and non-spatial input data, which can be acquired from distributed data sources over the Internet. Therefore, a computational approach for processing input data and representing modeling results is elaborated; it is capable of accessing and processing data from heterogeneous spatial and non-spatial data sources. Application of the data-gathering approach and the facility location model is demonstrated on a fast-food restaurant location problem.
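As a toy illustration of weighing customers, competitors, and real-estate cost against each other, the sketch below scalarizes the three criteria into one weighted score; the weights, field names, and candidate data are hypothetical, and the paper's actual model is multi-objective rather than a single weighted sum:

```python
def rank_locations(candidates, w_cust=1.0, w_comp=1.0, w_cost=1.0):
    """Rank candidate locations by a weighted score: more customers is
    better; more competitors and higher real-estate cost are worse."""
    def score(loc):
        return (w_cust * loc['customers']
                - w_comp * loc['competitors']
                - w_cost * loc['cost'])
    return sorted(candidates, key=score, reverse=True)

# two hypothetical restaurant sites
candidates = [
    {'name': 'A', 'customers': 120, 'competitors': 4, 'cost': 30.0},
    {'name': 'B', 'customers': 90, 'competitors': 1, 'cost': 10.0},
]
```

Note that changing the cost weight can flip the ranking, which is exactly why the paper treats the criteria as separate objectives rather than fixing one trade-off in advance.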
147.
Under-segmentation of an image containing multiple objects is a common problem in image segmentation algorithms. This paper presents a novel approach for splitting clumps formed by multiple objects due to under-segmentation. The proposed algorithm has three steps: (1) decide whether to split a candidate connected component, using application-specific shape classification; (2) find a pair of points for clump splitting; and (3) join the selected pair of points. In the first step, a shape classifier determines whether a connected component should be split. In the second step, a pair of splitting points is detected using a bottleneck rule, under the assumption that the desired objects are roughly convex. In the third step, the selected splitting points are joined by finding the optimal splitting line between them, based on minimizing an image energy. The shape classifier is built offline from various shape features and a support vector machine; steps two and three are application-independent. The performance of the method is evaluated on images from various applications. Experimental results show that the proposed approach outperforms state-of-the-art algorithms for the clump splitting problem.
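The intuition behind the second step (the bottleneck rule) can be sketched in a few lines: on a roughly convex pair of fused objects, the best split points are close together in the plane but far apart along the contour. The brute-force search, the `min_sep` parameter, and the example contour below are our simplification, not the paper's detection rule:

```python
import math

def bottleneck_pair(boundary, min_sep=5):
    """Pick the pair of contour points that are closest in the plane while
    at least min_sep indices apart along the (closed) contour."""
    n = len(boundary)
    best, best_d = None, math.inf
    for i in range(n):
        for j in range(i + 1, n):
            along = min(j - i, n - (j - i))   # separation along the contour
            if along < min_sep:
                continue                      # skip near-neighbours on the boundary
            d = math.dist(boundary[i], boundary[j])
            if d < best_d:
                best, best_d = (i, j), d
    return best

# a peanut-shaped contour whose waist lies between points 3 and 9
boundary = [(0, 0), (1, 0), (2, 0), (3, 0.2), (4, 0), (5, 0), (6, 0),
            (5, 1), (4, 1), (3, 0.8), (2, 1), (1, 1)]
```

Step three would then connect the returned pair by the minimum-energy splitting line rather than a straight segment.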
148.
Amir Talaei-Khoei, Terje Solvoll, Pradeep Ray, Nandan Parameshwaran, Journal of Computer and System Sciences, 2012, 78(1): 370–391
The field of computer-supported cooperative work aims at providing information technology models, methods, and tools that assist individuals in cooperating. This paper is based on three main observations from the literature. First, one of the problems in utilizing information technology for cooperation is identifying the relevance of information, called awareness. Second, research in computer-supported cooperative work proposes the use of agent technologies to help individuals maintain their awareness. Third, the literature lacks formalized methods for how software agents can identify awareness. This paper addresses the problem of awareness identification. Its main contribution is to propose and evaluate a formalized structure, called Policy-based Awareness Management (PAM). PAM extends the logic of general awareness in order to identify the relevance of information. PAM formalizes existing policies into the Directory Enabled Networks-next generation structure and uses them as a source for awareness identification. The formalism is demonstrated by applying PAM to the space shuttle Columbia disaster of 2003. The paper also argues that PAM increases the efficacy and cost-efficiency of the logic of general awareness; this is evaluated by simulation of hypothetical scenarios as well as a case study.
149.
Subir Kumar Ghosh, Partha Pratim Goswami, Anil Maheshwari, Subhas Chandra Nandy, Sudebkumar Prasant Pal, Swami Sarvattomananda, The Visual Computer, 2012, 28(12): 1229–1237
Let s be a point source of light inside a polygon P of n vertices. A polygonal path from s to some point t inside P is called a diffuse reflection path if the turning points of the path lie on edges of P. A diffuse reflection path is said to be optimal if it has the minimum number of reflections. The problem of computing a diffuse reflection path from s to t inside P has not been considered explicitly in the past. We present three different algorithms for this problem, each producing a suboptimal path. The first algorithm uses a greedy method, the second uses a transformation of a minimum link path, and the third uses the edge–edge visibility graph of P. The first two algorithms are for polygons without holes and run in O(n + k log n) time, where k denotes the number of reflections in the constructed path. The third algorithm is for polygons with or without holes and runs in O(n²) time. The number of reflections in the path produced by the third algorithm is at most three times that of an optimal diffuse reflection path. Though the combinatorial approach used in the third algorithm gives a better bound on the number of reflections, the first two algorithms stand on the merit of their elegant geometric approaches based on local geometric information.
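The defining property of a diffuse reflection path (every turning point lies on a polygon edge) is easy to check mechanically. The sketch below verifies only that property via a point-on-segment test; it does not check visibility between consecutive points, so it is a simplification of the paper's setting, and all names are ours:

```python
def is_diffuse_reflection_path(path, edges, eps=1e-9):
    """True iff every interior turning point of path lies on some polygon
    edge. Edges are ((x1, y1), (x2, y2)) pairs; endpoints s and t are
    interior points and are skipped."""
    def on_segment(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
        if abs(cross) > eps:                  # not on the supporting line
            return False
        dot = (px - ax) * (bx - ax) + (py - ay) * (by - ay)
        return -eps <= dot <= (bx - ax) ** 2 + (by - ay) ** 2 + eps
    return all(any(on_segment(p, a, b) for a, b in edges) for p in path[1:-1])

# the boundary of a 4x4 axis-aligned square
square = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((4, 4), (0, 4)), ((0, 4), (0, 0))]
```

An optimal path additionally minimizes the number of interior turning points among all paths from s to t satisfying this check.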
150.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages of analysis. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment has significant promise in process diagnostic efforts.
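The core idea of aligning traces so that identical events line up in columns can be illustrated with a pairwise dynamic-programming alignment. The scoring (gaps cost 1, mismatches 2), the gap symbol, and the function below are our simplification; the ProM plugin performs multiple trace alignment over a whole event log:

```python
def align_traces(t1, t2, gap='-'):
    """Pairwise alignment of two traces (strings of one-letter event
    names) by edit-distance dynamic programming."""
    n, m = len(t1), len(t2)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if t1[i - 1] == t2[j - 1] else 2
            cost[i][j] = min(cost[i - 1][j - 1] + sub,
                             cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    # backtrack to recover the two aligned rows
    a1, a2, i, j = [], [], n, m
    while i or j:
        sub = 0 if i and j and t1[i - 1] == t2[j - 1] else 2
        if i and j and cost[i][j] == cost[i - 1][j - 1] + sub:
            a1.append(t1[i - 1]); a2.append(t2[j - 1]); i -= 1; j -= 1
        elif i and cost[i][j] == cost[i - 1][j] + 1:
            a1.append(t1[i - 1]); a2.append(gap); i -= 1
        else:
            a1.append(gap); a2.append(t2[j - 1]); j -= 1
    return ''.join(reversed(a1)), ''.join(reversed(a2))
```

Aligning the traces "abcd" and "abd" yields the rows "abcd" and "ab-d": the gap column exposes exactly where the second instance skipped an activity, which is the kind of deviation trace alignment makes visible in an event log.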