Full-text access type
Paid full text | 4575 articles |
Free | 164 articles |
Free (domestic) | 5 articles |
Subject classification
Electrical engineering | 25 articles |
General | 8 articles |
Chemical industry | 1292 articles |
Metalworking | 46 articles |
Machinery and instruments | 47 articles |
Building science | 187 articles |
Mining engineering | 13 articles |
Energy and power | 67 articles |
Light industry | 728 articles |
Hydraulic engineering | 27 articles |
Petroleum and natural gas | 9 articles |
Radio and electronics | 122 articles |
General industrial technology | 508 articles |
Metallurgical industry | 1115 articles |
Nuclear technology | 24 articles |
Automation technology | 526 articles |
Publication year
2023 | 27 articles |
2022 | 190 articles |
2021 | 207 articles |
2020 | 87 articles |
2019 | 103 articles |
2018 | 111 articles |
2017 | 85 articles |
2016 | 101 articles |
2015 | 95 articles |
2014 | 136 articles |
2013 | 264 articles |
2012 | 178 articles |
2011 | 228 articles |
2010 | 194 articles |
2009 | 161 articles |
2008 | 216 articles |
2007 | 193 articles |
2006 | 158 articles |
2005 | 137 articles |
2004 | 121 articles |
2003 | 101 articles |
2002 | 121 articles |
2001 | 77 articles |
2000 | 79 articles |
1999 | 93 articles |
1998 | 96 articles |
1997 | 65 articles |
1996 | 79 articles |
1995 | 52 articles |
1994 | 60 articles |
1993 | 58 articles |
1992 | 62 articles |
1990 | 53 articles |
1989 | 74 articles |
1988 | 46 articles |
1987 | 51 articles |
1986 | 56 articles |
1985 | 55 articles |
1984 | 63 articles |
1983 | 45 articles |
1982 | 31 articles |
1981 | 47 articles |
1980 | 27 articles |
1979 | 23 articles |
1978 | 22 articles |
1977 | 22 articles |
1976 | 33 articles |
1975 | 27 articles |
1973 | 17 articles |
1970 | 18 articles |
Sort order: 4744 results found (search time: 15 ms)
51.
Barbara Regine Armbruster 《Materials and Manufacturing Processes》2017,32(7-8):728-739
The gold work of the Western European Middle and Late Bronze Age (about 1500–700 BC) is characterized by solid ornaments and vessels. This article deals with the manufacturing techniques of heavy gold jewelry by presenting a gold hoard found at Guînes, Pas-de-Calais, in Northern France, as a case study. In particular, three ornament types are taken into consideration: (1) solid penannular neck- and arm-rings, plain or with linear or geometric decoration; (2) flange-twisted ornaments that appear in different dimensions, from small earrings through neck rings up to the large size of a belt; (3) complex, composite ornaments. The technological aspects dealt with in this precious metal working context are manifold, including ingot and lost-wax casting, hammering and bending of solid rods, the production of flange-twisted rods, chasing as a decoration method, and finally joining techniques such as soldering, riveting, folding, and creasing.
52.
Computer systems increasingly carry out tasks in mixed networks, that is, in group settings in which they interact both with other computer systems and with people. Participants in these heterogeneous human-computer groups vary in their capabilities, goals, and strategies; they may cooperate, collaborate, or compete. The presence of people in mixed networks raises challenges for the design and the evaluation of decision-making strategies for computer agents. This paper describes several new decision-making models that represent, learn, and adapt to various social attributes that influence people's decision-making, and presents a novel approach to evaluating such models. It identifies a range of social attributes in an open-network setting that influence people's decision-making and thus affect the performance of computer-agent strategies, and establishes the importance of learning and adaptation to the success of such strategies. The settings vary in the capabilities, goals, and strategies that people bring into their interactions. The studies deploy a configurable system called Colored Trails (CT) that generates a family of games. CT is an abstract, conceptually simple but highly versatile game in which players negotiate and exchange resources to enable them to achieve their individual or group goals. It provides a realistic analogue to multi-agent task domains without requiring extensive domain modeling. It is less abstract than payoff matrices, and people exhibit less strategic and more helpful behavior in CT than in the identical payoff-matrix decision-making context. By not requiring extensive domain modeling, CT enables agent researchers to focus their attention on strategy design, and it provides an environment in which the influence of social factors can be better isolated and studied.
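As a very loose illustration of the kind of game mechanics the abstract describes (the real CT game is configurable; the colors, chip values, and goal bonus below are invented for the sketch), a player spends a chip of the matching color for each square traversed and scores a bonus for reaching the goal plus points for leftover chips:

```python
from collections import Counter

def try_path(path_colors, chips):
    """Attempt to traverse a path; return (reached_goal, leftover chip count)."""
    held = Counter(chips)
    for color in path_colors:
        if held[color] == 0:
            return False, sum(held.values())  # stuck: missing a chip
        held[color] -= 1                      # surrender one chip per square
    return True, sum(held.values())

def score(reached_goal, leftover_chips, goal_bonus=100, chip_value=5):
    """Invented scoring rule: goal bonus plus value of unspent chips."""
    return (goal_bonus if reached_goal else 0) + chip_value * leftover_chips

reached, left = try_path(["red", "blue"], ["red", "blue", "green"])
print(score(reached, left))  # 100 (goal) + 5 (one leftover chip) = 105
```

Resource exchange between players would then amount to moving chips between their `chips` multisets before scoring.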
53.
Previous work has demonstrated that the use of structured abstracts can lead to greater completeness and clarity of information, making it easier for researchers to extract information about a study. In academic year 2007/08, Durham University’s Computer Science Department revised the format of the project report that final year students were required to write, from a ‘traditional dissertation’ format, using a conventional abstract, to that of a 20-page technical paper, together with a structured abstract. This study set out to determine whether inexperienced authors (students writing their final project reports for computing topics) find it easier to produce good abstracts, in terms of completeness and clarity, when using a structured form rather than a conventional form. We performed a controlled quasi-experiment in which a set of ‘judges’ each assessed one conventional and one structured abstract for its completeness and clarity. These abstracts were drawn from those produced by four cohorts of final year students: two preceding the change and two following it. The assessments were performed using a form of checklist similar to those used in previous experimental studies. We used 40 abstracts (10 per cohort) and 20 student ‘judges’ to perform the evaluation. Scored on a scale of 0.1–1.0, the mean for completeness increased from 0.37 to 0.61 when using a structured form. For clarity, using a scale of 1–10, the mean score increased from 5.1 to 7.2. For a minimum goal of scoring 50% for both completeness and clarity, only 3 of 19 conventional abstracts achieved this level, while only 3 of 20 structured abstracts failed to reach it. We conclude that the use of a structured form for organising the material of an abstract can assist inexperienced authors with writing technical abstracts that are clearer and more complete than those produced without the framework provided by such a mechanism.
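The checklist-based scoring described above can be sketched as follows. This is an illustration of the general scheme, not the study's actual instrument: each judge marks each checklist item as present (1) or absent (0), completeness is the fraction of items present, and clarity is a single 1–10 rating; group means are then compared.

```python
def completeness(checklist_marks):
    """Fraction of checklist items the judge found present (0.0-1.0)."""
    return sum(checklist_marks) / len(checklist_marks)

def group_means(assessments):
    """Mean completeness and clarity over a list of (marks, clarity) pairs."""
    comp = [completeness(marks) for marks, _ in assessments]
    clar = [clarity for _, clarity in assessments]
    return sum(comp) / len(comp), sum(clar) / len(clar)

# Hypothetical judge assessments: (checklist marks, clarity score on 1-10)
conventional = [([1, 0, 1, 0, 0], 5), ([0, 1, 1, 0, 0], 5)]
structured   = [([1, 1, 1, 0, 1], 7), ([1, 1, 0, 1, 1], 8)]

print(group_means(conventional))  # (0.4, 5.0)
print(group_means(structured))    # (0.8, 7.5)
```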
54.
Ruiduo Yang, Sudeep Sarkar, Barbara Loeding 《IEEE transactions on pattern analysis and machine intelligence》2010,32(3):462-477
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem, and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets: with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with conditional random field (CRF) and latent dynamic CRF (LDCRF) based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
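The core idea of a level-building DP with a virtual movement-epenthesis option can be sketched in miniature. This is my simplification, not the paper's eLB: frames are plain numbers, sign templates match by elementwise distance, and an unmodeled "me" segment may absorb any single frame at a fixed penalty (`ME_COST` is an invented constant) instead of requiring an explicit epenthesis model.

```python
ME_COST = 1.0  # hypothetical fixed cost per frame absorbed as epenthesis

def match_cost(template, frames):
    """Toy distance between a sign template and a span of frames."""
    if len(template) != len(frames):
        return float("inf")
    return sum(abs(t - f) for t, f in zip(template, frames))

def level_build(frames, lexicon, max_levels=4):
    """Return (best cost, label sequence) covering all frames."""
    best = {0: (0.0, [])}  # frames consumed so far -> (cost, labels)
    for _ in range(max_levels):
        nxt = dict(best)
        for start, (cost, labels) in best.items():
            # Option 1: absorb one frame as movement epenthesis ("me").
            if start < len(frames):
                cand = (cost + ME_COST, labels + ["me"])
                if start + 1 not in nxt or cand[0] < nxt[start + 1][0]:
                    nxt[start + 1] = cand
            # Option 2: match a lexicon sign starting at this frame.
            for word, tmpl in lexicon.items():
                end = start + len(tmpl)
                c = match_cost(tmpl, frames[start:end])
                if c < float("inf"):
                    cand = (cost + c, labels + [word])
                    if end not in nxt or cand[0] < nxt[end][0]:
                        nxt[end] = cand
        best = nxt
    return best.get(len(frames), (float("inf"), []))

lexicon = {"HELLO": [1, 2], "YOU": [3]}
frames = [1, 2, 9, 3]  # the 9 is an unmodeled transition frame
print(level_build(frames, lexicon))  # labels: HELLO, me, YOU
```

The real algorithm works over video feature vectors and nests a second DP for hand-candidate selection, but the virtual-me mechanism, labeling transition frames without a trained model for them, is the same shape.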
55.
The use of web-based learning and assessment tools is growing in tertiary institutions around the world. To date, very few papers have reported the development and evaluation of a web-based formative assessment tool for postgraduate students. The aim of the present paper is to report on the development and evaluation of an online formative assessment tool for this student group. The tool was evaluated by a sample of undergraduate students, postgraduate students, and academic staff within a psychology department in order to determine its suitability and sensitivity. The results of this pilot test suggest that the development of such a tool is both appropriate and feasible for Masters students studying psychology.
56.
Context
The knowledge about particular characteristics of software that are indicators for defects is very valuable for testers because it helps them to focus the testing effort and to allocate their limited resources appropriately.
Objective
In this paper, we explore the relationship between several historical characteristics of files and their defect count.
Method
For this purpose, we propose an empirical approach that uses statistical procedures and visual representations of the data in order to determine indicators for a file’s defect count. We apply this approach to nine open source Java projects across different versions.
Results
Only 4 of 9 programs show moderate correlations between a file’s defects in previous and in current releases in more than half of the analysed releases. In contrast to our expectations, the oldest files represent the most fault-prone files. Additionally, late changes correlate with a file’s defect count only partly. The number of changes, the number of distinct authors performing changes to a file, and the file’s age are good indicators for a file’s defect count in all projects.
Conclusion
Our results show that a software’s history is a good indicator for its quality. We did not find one indicator that persists across all projects in an equal manner. Nevertheless, there are several indicators that show significant, strong correlations in nearly all projects: DA (number of distinct authors) and FC (frequency of change). In practice, statistical analyses have to be performed for each software system in order to evaluate the best indicator(s) for a file’s defect count.
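The kind of analysis the abstract describes, rank correlation between a per-file history metric and its defect count, can be reproduced with Spearman's coefficient. The sketch below implements it from scratch (average ranks for ties, then Pearson correlation of the ranks); the file metric values and defect counts are invented for illustration.

```python
def ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

distinct_authors = [1, 2, 2, 5, 8, 3]   # DA per file (invented data)
defects          = [0, 1, 2, 4, 9, 2]   # defect count per file (invented)
print(round(spearman(distinct_authors, defects), 3))  # strong positive
```

A coefficient near the 0.61-0.97 range reported for the study's better indicators would mark the metric as a useful defect predictor for that project.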
57.
Interactive optimization algorithms use real-time interaction to incorporate decision maker preferences based on the subjective quality of evolving solutions. In water resources management problems where numerous qualitative criteria exist, such interactive optimization methods can facilitate the search for comprehensive and meaningful solutions for the decision maker. The decision makers using such a system are, however, likely to go through their own learning process as they view new solutions and gain knowledge about the design space. This leads to temporal changes (nonstationarity) in their preferences that can impair the performance of interactive optimization algorithms. This paper proposes a new interactive optimization algorithm, the Case-Based Micro Interactive Genetic Algorithm, which uses a case-based memory and case-based reasoning to manage the effects of nonstationarity in the decision maker's preferences within the search process without impairing the performance of the search algorithm. The paper focuses on exploring the advantages of this approach within the domain of groundwater monitoring design, though it is applicable to many other problems. The methodology is tested under nonstationary preference conditions using simulated and real human decision makers, and it is also compared with a non-interactive genetic algorithm and a previous version of the interactive genetic algorithm.
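The case-based memory idea can be sketched in a toy form. This is not the paper's algorithm, just the general mechanism: store the decision maker's past ratings of solutions, and when a new candidate is within an (invented) similarity threshold of a stored case, reuse the stored rating instead of querying the user again, which limits both user fatigue and the impact of moment-to-moment preference drift.

```python
def distance(a, b):
    """Euclidean distance between two solution vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class CaseBase:
    def __init__(self, threshold=1.0):
        self.cases = []              # list of (solution, rating) pairs
        self.threshold = threshold

    def rate(self, solution, ask_user):
        # Reuse the rating of a sufficiently similar stored case.
        for sol, rating in self.cases:
            if distance(sol, solution) <= self.threshold:
                return rating        # no new user query needed
        rating = ask_user(solution)  # fall back to interactive rating
        self.cases.append((solution, rating))
        return rating

queries = []
def ask_user(sol):
    queries.append(sol)
    return sum(sol)                  # stand-in for a human preference score

cb = CaseBase(threshold=0.5)
cb.rate((1.0, 2.0), ask_user)        # asks the user and stores the case
cb.rate((1.1, 2.0), ask_user)        # within threshold: reuses the rating
print(len(queries))                  # only one real user query was made
```

Handling nonstationarity then becomes a policy over this memory, for example ageing out or re-rating cases whose stored ratings no longer match fresh user feedback.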
58.
Andrej Gisbrecht, Bassam Mokbel, Barbara Hammer 《Neurocomputing》2011,74(9):1359-1371
The generative topographic mapping (GTM) has been proposed as a statistical model that represents high-dimensional data by a distribution induced by a sparse lattice of points in a low-dimensional latent space, such that visualization, compression, and data inspection become possible. The formulation in terms of a generative statistical model has the benefit that relevant parameters of the model can be determined automatically based on an expectation-maximization scheme. Further, the model offers large flexibility, such as a direct out-of-sample extension and the possibility to obtain different degrees of granularity of the visualization without the need for additional training. Original GTM is restricted to Euclidean data points in a given Euclidean vector space. Often, data are not explicitly embedded in a Euclidean vector space; rather, pairwise dissimilarities of the data can be computed, i.e., the relations between data points are given rather than the data vectors themselves. We propose a method which extends GTM to relational data and which allows us to achieve a sparse representation in latent space of data characterized by pairwise dissimilarities. The method, relational GTM, is demonstrated on several benchmarks.
59.
Bruno Rossi, Barbara Russo, Giancarlo Succi 《Information and Software Technology》2011,53(11):1209-1226
Context
Adopting IT innovation in organizations is a complex decision process driven by technical, social, and economic issues. Thus, organizations that decide to adopt an innovation take a decision of uncertain implementation success, as the actual use of a new technology might not be the one expected. The misalignment between planned and effective use of an innovation is called the assimilation gap.
Objective
This research aims at defining a quantitative instrument for measuring the assimilation gap and applying it to the case of the adoption of OSS.
Method
In this paper, we use Arthur's theory of path dependence and increasing returns. In particular, we model the use of software applications (planned or actual) by stochastic processes defined by the daily amounts of files created with the applications. We quantify the assimilation gap by comparing the resulting models using measures of proximity.
Results
We apply and validate our method on a real case study of the introduction of OpenOffice. We found a gap between planned and effective use despite well-defined directives to use the new technology. These findings suggest a need for strategy re-calibration that takes into account environmental factors and individual attitudes.
Conclusions
The theory of path dependence is a valid instrument to model the assimilation gap, provided that information on the strategy toward innovation and quantitative data on actual use are available.
60.
Six different methods to calculate Strain Index (SI) scores for jobs with multiple forces/tasks were developed. Exposure data from 733 subjects at 12 different worksites were used to calculate these SI scores. Results show that using different SI computation methods can result in different SI scores, and hence different risk-level classifications. However, some of the simpler methods generated SI scores comparable to those of the more complicated composite SI method. Despite differences in the scores between the six SI computation methods, Spearman rank-order correlation coefficients of 0.61-0.97 were found between the methods. With some confidence, ergonomic practitioners may use the simpler methods, depending on their specificity requirements in job evaluations and available resources. Some SI computation methods may tend to over-estimate job risk levels, while others may tend to under-estimate them, due to the different ways the various SI parameters are obtained and combined.
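To make the multi-task aggregation problem concrete: the per-task SI is the product of six multipliers (intensity of exertion, duration of exertion, efforts per minute, posture, speed of work, daily duration), and a job-level score must somehow combine the per-task scores. The sketch below shows two simple aggregation rules (worst task, and time-weighted mean); these are illustrative simplifications with invented values, not the six methods compared in the study.

```python
def task_si(multipliers):
    """SI score for one task: product of its six multipliers."""
    assert len(multipliers) == 6
    si = 1.0
    for m in multipliers:
        si *= m
    return si

def job_si_max(tasks):
    """Rate the whole job by its worst task."""
    return max(task_si(m) for m, _ in tasks)

def job_si_time_weighted(tasks):
    """Weight each task's SI by its share of the work day."""
    total = sum(hours for _, hours in tasks)
    return sum(task_si(m) * hours for m, hours in tasks) / total

# (six multipliers, hours per day) - invented values for illustration
tasks = [
    ([3.0, 1.0, 1.5, 1.0, 1.0, 0.5], 6.0),   # light task, most of the day
    ([9.0, 2.0, 1.5, 1.5, 1.0, 0.25], 2.0),  # heavy task, short duration
]
print(job_si_max(tasks))            # 10.125, driven by the heavy task
print(job_si_time_weighted(tasks))  # 4.21875, diluted by the light task
```

The gap between the two job-level scores (10.125 vs. 4.21875 here) illustrates why the choice of computation method can shift a job across risk-level thresholds.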