81.
Computer systems increasingly carry out tasks in mixed networks, that is, in group settings in which they interact both with other computer systems and with people. Participants in these heterogeneous human-computer groups vary in their capabilities, goals, and strategies; they may cooperate, collaborate, or compete. The presence of people in mixed networks raises challenges for the design and the evaluation of decision-making strategies for computer agents. This paper describes several new decision-making models that represent, learn, and adapt to various social attributes that influence people's decision-making, and it presents a novel approach to evaluating such models. It identifies a range of social attributes in an open-network setting that influence people's decision-making and thus affect the performance of computer-agent strategies, and it establishes the importance of learning and adaptation to the success of such strategies. The settings vary in the capabilities, goals, and strategies that people bring into their interactions. The studies deploy a configurable system called Colored Trails (CT) that generates a family of games. CT is an abstract, conceptually simple but highly versatile game in which players negotiate and exchange resources to enable them to achieve their individual or group goals. It provides a realistic analogue to multi-agent task domains without requiring extensive domain modeling. It is less abstract than payoff matrices, and people exhibit less strategic and more helpful behavior in CT than in the identical payoff-matrix decision-making context. By not requiring extensive domain modeling, CT enables agent researchers to focus their attention on strategy design, and it provides an environment in which the influence of social factors can be better isolated and studied.
82.
Previous work has demonstrated that the use of structured abstracts can lead to greater completeness and clarity of information, making it easier for researchers to extract information about a study. In academic year 2007/08, Durham University’s Computer Science Department revised the format of the project report that final-year students were required to write, from a ‘traditional dissertation’ format with a conventional abstract to that of a 20-page technical paper together with a structured abstract. This study set out to determine whether inexperienced authors (students writing their final project reports for computing topics) find it easier to produce good abstracts, in terms of completeness and clarity, when using a structured form rather than a conventional form. We performed a controlled quasi-experiment in which a set of ‘judges’ each assessed one conventional and one structured abstract for completeness and clarity. The abstracts were drawn from those produced by four cohorts of final-year students: the two preceding the change and the two following it. The assessments were performed using a form of checklist similar to those used in previous experimental studies. We used 40 abstracts (10 per cohort) and 20 student judges to perform the evaluation. Scored on a scale of 0.1–1.0, the mean for completeness increased from 0.37 to 0.61 when using a structured form. For clarity, using a scale of 1–10, the mean score increased from 5.1 to 7.2. For a minimum goal of scoring 50% for both completeness and clarity, only 3 of 19 conventional abstracts achieved this level, while only 3 of 20 structured abstracts failed to reach it. We conclude that the use of a structured form for organising the material of an abstract can assist inexperienced authors in writing technical abstracts that are clearer and more complete than those produced without the framework provided by such a mechanism.
83.
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem, and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets: with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compare the performance with Conditional Random Field (CRF) and Latent Dynamic CRF (LDCRF) based approaches. The experiments show more than 40 percent improvement over the CRF and LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context, and we find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
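The level-building idea with a virtual me label can be illustrated with a small sketch. The following is a minimal, hypothetical rendering of the mechanism (it is not the authors' eLB implementation, omits grammar models and the nested hand-candidate DP, and uses placeholder sign models and a constant per-frame me penalty):

```python
# A minimal sketch of level-building DP with a virtual movement-epenthesis
# ("me") option. Sign models, frame features, and the me penalty are
# hypothetical placeholders for illustration only.
import numpy as np

def sign_cost(model, frames):
    """Hypothetical matching cost of a sign model against a frame span."""
    return float(np.sum(np.abs(frames - model.mean(axis=0))))  # placeholder cost

def elb_decode(frames, sign_models, me_penalty=1.0, max_signs=5):
    """Segment `frames` into up to `max_signs` labels; each segment is either
    one of `sign_models` or the virtual 'me' label, which needs no model."""
    T = len(frames)
    INF = float("inf")
    # best[l][t] = (cost, backpointer) for explaining frames[0:t] with l segments
    best = [[(INF, None)] * (T + 1) for _ in range(max_signs + 1)]
    best[0][0] = (0.0, None)
    for l in range(1, max_signs + 1):
        for t in range(1, T + 1):
            for s in range(t):                              # last segment is frames[s:t]
                prev_cost, _ = best[l - 1][s]
                if prev_cost == INF:
                    continue
                span = frames[s:t]
                # virtual me option: constant per-frame penalty, no explicit model
                cands = [("me", prev_cost + me_penalty * len(span))]
                for name, model in sign_models.items():
                    cands.append((name, prev_cost + sign_cost(model, span)))
                label, cost = min(cands, key=lambda c: c[1])
                if cost < best[l][t][0]:
                    best[l][t] = (cost, (s, label, l - 1))
    # choose the best number of segments that covers all frames, then backtrack
    l_best = min(range(1, max_signs + 1), key=lambda l: best[l][T][0])
    labels, t, l = [], T, l_best
    while t > 0:
        s, label, l = best[l][t][1]
        labels.append(label)
        t = s
    return list(reversed(labels)), best[l_best][T][0]
```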
84.
In this paper, we investigate the application of Evolving Trees (ETs) to the analysis of mass spectrometric data of bacteria. Evolving Trees are extensions of self-organizing maps (SOMs) developed for hierarchical classification systems and are therefore well suited to taxonomic problems such as the identification of bacteria. Here, we focus on three topics: an appropriate pre-processing and encoding of the spectra, an adequate data model by means of a hierarchical Evolving Tree, and an interpretable visualization. First, the high dimensionality of the data is reduced by a compact representation; here, we employ sparse coding specifically tailored to the processing of mass spectra. In the second step, the topographic information expected in the fingerprints is used for advanced tree evaluation and analysis. We adapted the original topographic product for SOMs to ETs to obtain a judgment of topography, and we transferred the concept of the U-matrix, used to evaluate the separability of SOMs, to its analog in ETs. We demonstrate these extensions on two mass spectrometric data sets of bacterial fingerprints and show their classification and evaluation capabilities in comparison to state-of-the-art techniques.
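As a small illustration of one of the evaluation tools mentioned above, the sketch below computes the classic U-matrix for a rectangular SOM lattice, i.e. the SOM-side concept that the paper transfers to Evolving Trees; the grid layout and prototypes are assumptions, and the tree analog itself is not reproduced here:

```python
# A minimal U-matrix sketch for a rectangular SOM: each unit gets the mean
# distance between its prototype and the prototypes of its lattice neighbours.
import numpy as np

def u_matrix(prototypes):
    """prototypes: array of shape (rows, cols, dim) holding SOM codebook vectors."""
    rows, cols, _ = prototypes.shape
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighbourhood
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(prototypes[r, c] - prototypes[rr, cc]))
            u[r, c] = np.mean(dists)                             # high values mark cluster borders
    return u
```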
85.
The use of web-based learning and assessment tools is growing in tertiary institutions around the world. To date, very few papers have reported the development and evaluation of a web-based formative assessment tool for postgraduate students. This paper reports the development and evaluation of an online formative assessment tool for this student group. The tool was evaluated by a sample of undergraduate students, postgraduate students, and academic staff within a psychology department in order to determine its suitability and sensitivity. The results of this pilot test suggest that the development of such a tool is both appropriate and feasible for Masters students studying psychology.
86.
Context: The knowledge about particular characteristics of software that are indicators for defects is very valuable for testers because it helps them to focus the testing effort and to allocate their limited resources appropriately.
Objective: In this paper, we explore the relationship between several historical characteristics of files and their defect count.
Method: For this purpose, we propose an empirical approach that uses statistical procedures and visual representations of the data in order to determine indicators for a file's defect count. We apply this approach to nine open source Java projects across different versions.
Results: Only 4 of the 9 programs show moderate correlations between a file's defects in previous and in current releases in more than half of the analysed releases. In contrast to our expectations, the oldest files are the most fault-prone files. Additionally, late changes correlate with a file's defect count only partly. The number of changes, the number of distinct authors performing changes to a file, and the file's age are good indicators for a file's defect count in all projects.
Conclusion: Our results show that a software system's history is a good indicator for its quality. We did not find one indicator that persists across all projects in an equal manner. Nevertheless, there are several indicators that show strong, significant correlations in nearly all projects: DA (number of distinct authors) and FC (frequency of change). In practice, statistical analyses have to be performed for each software system in order to evaluate the best indicator(s) for a file's defect count.
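A minimal sketch (with made-up data, not the authors' analysis scripts) of how such indicators can be checked: Spearman rank correlation between per-file history metrics such as FC, DA, and age and the files' defect counts:

```python
# Toy example: correlate per-file history metrics with defect counts.
# The numbers and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

files = pd.DataFrame({
    "FC":      [12,  3, 25,  7, 40,  1],   # frequency of change (number of changes)
    "DA":      [ 4,  1,  6,  2,  9,  1],   # number of distinct authors
    "age":     [36, 60, 12, 48,  6, 72],   # months since file creation
    "defects": [ 5,  0,  9,  2, 14,  1],   # defect count per file
})

for metric in ("FC", "DA", "age"):
    rho, p = spearmanr(files[metric], files["defects"])
    print(f"{metric}: rho={rho:.2f}, p={p:.3f}")
```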
87.
Interactive optimization algorithms use real-time interaction to incorporate decision maker preferences based on the subjective quality of evolving solutions. In water resources management problems, where numerous qualitative criteria exist, such interactive optimization methods can facilitate the search for comprehensive and meaningful solutions for the decision maker. The decision makers using such a system are, however, likely to go through their own learning process as they view new solutions and gain knowledge about the design space. This leads to temporal changes (nonstationarity) in their preferences that can impair the performance of interactive optimization algorithms. This paper proposes a new interactive optimization algorithm, the Case-Based Micro Interactive Genetic Algorithm, that uses a case-based memory and case-based reasoning to manage the effects of nonstationarity in the decision maker's preferences within the search process without impairing the performance of the search algorithm. The paper focuses on exploring the advantages of such an approach within the domain of groundwater monitoring design, though it is applicable to many other problems. The methodology is tested under nonstationary preference conditions using simulated and real human decision makers, and it is also compared with a non-interactive genetic algorithm and a previous version of the interactive genetic algorithm.
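The general mechanism of backing an interactive GA with a case-based memory can be sketched as follows. This is a simplified, hypothetical illustration rather than the authors' Case-Based Micro Interactive Genetic Algorithm: previously rated solutions are stored as cases, and new candidates are scored from the most similar stored case between human queries; the toy rating function and all parameters are assumptions:

```python
# Toy interactive GA with a case-based memory that reduces how often the
# human decision maker must be queried. Everything here is illustrative.
import random

class CaseBase:
    def __init__(self):
        self.cases = []                      # list of (solution, rating) pairs

    def add(self, solution, rating):
        self.cases.append((solution, rating))

    def estimate(self, solution):
        """Rating of the most similar stored case (simple matching similarity)."""
        best = max(self.cases, key=lambda c: sum(a == b for a, b in zip(solution, c[0])))
        return best[1]

def ask_human(solution):
    # Placeholder for the real-time preference query to the decision maker.
    return sum(solution) + random.random()

case_base = CaseBase()
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
for generation in range(20):
    scored = []
    for sol in population:
        if generation % 5 == 0 or not case_base.cases:     # periodic human query
            rating = ask_human(sol)
            case_base.add(sol, rating)
        else:
            rating = case_base.estimate(sol)                # reuse stored preferences
        scored.append((rating, sol))
    scored.sort(reverse=True)
    parents = [s for _, s in scored[:4]]
    # one-point crossover plus bit-flip mutation
    population = []
    while len(population) < 8:
        a, b = random.sample(parents, 2)
        cut = random.randint(1, 9)
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:
            i = random.randrange(10)
            child[i] = 1 - child[i]
        population.append(child)
```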
88.
Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to meet the needs of software engineering and its literature remains an ongoing process. As part of this process of refinement, we undertook two case studies which aimed 1) to compare the use of targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews published between January 1, 2004 and June 30, 2007, which used a manual search of selected journals and conferences, with a replication of that study based on a broad automated search. We found that broad automated searches find more studies than restricted manual searches, but the additional studies may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low-quality papers or if they are assessing research trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators incorporating at least two rounds of discussion.
89.
The generative topographic mapping (GTM) has been proposed as a statistical model to represent high-dimensional data by a distribution induced by a sparse lattice of points in a low-dimensional latent space, such that visualization, compression, and data inspection become possible. The formulation in terms of a generative statistical model has the benefit that the relevant parameters of the model can be determined automatically by an expectation-maximization scheme. Further, the model offers considerable flexibility, such as a direct out-of-sample extension and the possibility of obtaining different degrees of granularity of the visualization without the need for additional training. The original GTM is restricted to Euclidean data points in a given Euclidean vector space. Often, data are not explicitly embedded in a Euclidean vector space; rather, pairwise dissimilarities of the data can be computed, i.e. the relations between data points are given rather than the data vectors themselves. We propose a method which extends GTM to relational data and which allows us to achieve a sparse representation, in latent space, of data characterized by pairwise dissimilarities. The method, relational GTM, is demonstrated on several benchmarks.
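A minimal sketch of the relational trick that such an extension typically builds on (stated here as an assumption about the details, since the abstract does not spell them out): when each prototype is written as a convex combination of data points, squared prototype-to-data distances can be computed from the pairwise dissimilarity matrix alone:

```python
# Relational distance computation: with w_k = sum_j alpha_kj x_j and
# sum_j alpha_kj = 1, the squared distance d(x_i, w_k)^2 equals
# [D alpha_k]_i - 0.5 * alpha_k^T D alpha_k, where D holds the pairwise
# dissimilarities (interpreted as squared distances). Names are illustrative.
import numpy as np

def relational_distances(D, alpha):
    """
    D:     (n, n) matrix of pairwise dissimilarities between data points.
    alpha: (K, n) convex-combination coefficients of K prototypes (rows sum to 1).
    Returns a (K, n) matrix of squared prototype-to-data distances.
    """
    Da = alpha @ D                                        # (K, n): [D alpha_k]_i
    self_term = 0.5 * np.einsum("kn,kn->k", Da, alpha)    # 0.5 * alpha_k^T D alpha_k
    return Da - self_term[:, None]
```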
90.

Context: Adopting IT innovation in organizations is a complex decision process driven by technical, social and economic issues. Thus, organizations that decide to adopt an innovation take a decision whose implementation success is uncertain, as the actual use of the new technology might not be the one expected. The misalignment between the planned and the effective use of an innovation is called the assimilation gap.
Objective: This research aims at defining a quantitative instrument for measuring the assimilation gap and at applying it to the case of the adoption of OSS.
Method: In this paper, we use Arthur's theory of path dependence and increasing returns. In particular, we model the use of software applications (planned or actual) by stochastic processes defined by the daily numbers of files created with the applications. We quantify the assimilation gap by comparing the resulting models using measures of proximity.
Results: We apply and validate our method in a real case study of the introduction of OpenOffice. We found a gap between the planned and the effective use despite well-defined directives to use the new OSS technology. These findings suggest a need for strategy re-calibration that takes into account environmental factors and individual attitudes.
Conclusions: The theory of path dependence is a valid instrument for modelling the assimilation gap, provided that information on the strategy toward innovation and quantitative data on actual use are available.
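A minimal illustration of the kind of comparison the method performs, with made-up numbers and a simple normalised L1 distance standing in for the paper's proximity measure: planned and actual use are each summarised by daily counts of files created with the application, and the gap is the normalised distance between the two series:

```python
# Toy assimilation-gap computation; the daily file counts and the choice of a
# normalised L1 distance as the proximity measure are assumptions.
import numpy as np

planned = np.array([12, 15, 14, 16, 15, 13, 14])   # hypothetical planned daily file counts
actual  = np.array([ 4,  6,  5,  7,  3,  5,  6])   # hypothetical observed daily file counts

gap = np.abs(planned - actual).sum() / planned.sum()
print(f"assimilation gap = {gap:.2f}")              # 0 = use as planned, 1 = essentially unused
```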