Showing results 61–70 of 4,092.
61.
In this paper, two nonlinear optimization methods for the identification of nonlinear systems are compared. Both methods estimate the parameters of, for example, a polynomial nonlinear state-space model by means of a nonlinear least-squares optimization of the same cost function. While the first method does not estimate the states explicitly, the second estimates both states and parameters by adding an extra constraint equation. Both methods are introduced, and their similarities and differences are discussed using simulation data. The unconstrained method appears to be faster and more memory-efficient, but the constrained method has a significant advantage as well: it is robust for unstable systems for which bounded input-output data can be measured (e.g., a system captured in a stabilizing feedback loop). Both methods have been applied successfully to real-life measurement data.
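The unconstrained approach described above can be illustrated with a toy example. The sketch below is an illustrative assumption, not the paper's algorithm: it fits a scalar polynomial state-space model x[k+1] = a*x[k] + b*u[k] + c*x[k]**2, y[k] = x[k] by minimizing the output-error cost with finite-difference gradient descent; the parameter names, step sizes and the state clamp are all made up for the demonstration.

```python
def simulate(params, u, x0=0.0):
    """Simulate y[k] = x[k], x[k+1] = a*x[k] + b*u[k] + c*x[k]**2."""
    a, b, c = params
    x, y = x0, []
    for uk in u:
        y.append(x)
        x = a * x + b * uk + c * x * x
        x = max(-10.0, min(10.0, x))  # guard against divergence mid-search
    return y

def cost(params, u, y_meas):
    """Sum of squared output errors (the nonlinear least-squares cost)."""
    return sum((ys - ym) ** 2 for ys, ym in zip(simulate(params, u), y_meas))

def identify(u, y_meas, init=(0.0, 0.0, 0.0), lr=5e-4, steps=4000, eps=1e-6):
    """Unconstrained output-error fit via finite-difference gradient descent.
    No explicit state estimation: states are recomputed by simulation."""
    p = list(init)
    for _ in range(steps):
        base = cost(p, u, y_meas)
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += eps
            grad.append((cost(q, u, y_meas) - base) / eps)
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p

true_params = (0.5, 1.0, 0.05)
u = [((k * 37) % 11 - 5) / 5.0 for k in range(60)]  # deterministic excitation
y = simulate(true_params, u)
estimate = identify(u, y)
```

A production implementation would use an analytic Jacobian and a Gauss-Newton or Levenberg-Marquardt step instead of plain gradient descent, but the structure of the cost function is the same.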
62.
We have built a network of intercultural computer clubs, called come_IN, located in socially and culturally diverse neighborhoods. These clubs offer a place for children and adults of diverse ethnic backgrounds to share practices. We show how this initiative contributes to efforts to integrate migrant communities and the host society in Germany. In this paper, we analyze how collaborative project work and the use of mobile media and technologies contribute to integration processes in multicultural neighborhoods. Qualitative data gathered from interviews with club participants, participant observation in the computer clubs, and the analysis of artifacts created during project work provide the background needed to match local needs and peculiarities with (mobile) technologies. Based on these findings, we present two approaches to extending the technological infrastructure: (1) a mesh network extending the clubs into the neighborhood, and (2) a project management tool that supports projects and stimulates the sharing of ideas among them.
63.
Model-based performance evaluation methods for software architectures can help architects assess design alternatives and save the cost of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which so far are barely understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model-creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires and screen recordings, and analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
64.
Can an electronic portfolio that is both a multimedia container for student work and a tool to support key learning processes have a positive impact on the literacy practices and self-regulated learning skills of students? This article presents the findings of a yearlong study conducted in three Canadian provinces during the 2007–2008 school year, initially involving 32 teachers and 388 students. Due to varying levels of implementation, our final data set included 14 teachers and 296 students. Using a non-equivalent pre-test/post-test design, we found that grade 4–6 students in classrooms where the teacher provided regular and appropriate use of the electronic portfolio tool ePEARL (i.e., the medium–high implementation condition, n = 7 classrooms and 121 students), compared to control students (n = 7 classrooms and 175 students) who did not use ePEARL, showed significant improvements (p < .05) in their writing skills on a standardized literacy measure (i.e., the constructed-response subtest of the Canadian Achievement Test, 4th ed.) and in certain metacognitive skills measured via student self-report. The results of this study indicate that teaching with ePEARL has positive impacts on students' literacy and self-regulated learning skills when the tool is used regularly and integrated into classroom instruction.
65.
A theoretical model of professional identification is developed and empirically examined as a means of understanding information technology (IT) workers' attachment to the IT profession. Professional identification represents oneness with, or belonging to, a profession and provides a unique means of investigating and evaluating the IT profession. Results from a survey of 305 IT workers indicate that professional identification is directly affected by three factors: (1) the individual's need for professional identification; (2) the individual's perceived similarity to others in the IT profession; and (3) the individual's perceptions of the IT profession, signifying the importance of internalization to identification. Professional identification is also indirectly affected by the public's perception of the IT profession.
66.
Subspace sums for extracting non-random data from massive noise
An algorithm is introduced that distinguishes relevant data points from randomly distributed noise. The algorithm is related to subspace clustering based on axis-parallel projections, but considers membership in any projected cluster of a given side length, as opposed to one particular cluster. An aggregate measure is introduced, based on the total number of points that are close to the given point across all 2^d possible axis-parallel projections of a d-dimensional hypercube. No explicit summation over subspaces is required to evaluate this measure. Attribute values are normalized based on rank order to avoid assumptions about the distribution of the random data. The effectiveness of the algorithm is demonstrated through comparison with conventional outlier detection on a real microarray data set as well as on time-series subsequence data.
Anne M. Denton
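The "no explicit summation over subspaces" claim rests on a factorization: for a pair of points, the number of axis-parallel projections (out of 2^d, including the empty one) in which they lie within a common window of side w equals the product over dimensions of (1 + [close in that dimension]), i.e. 2 raised to the number of close dimensions. A minimal sketch of this idea follows; the rank normalization and the scoring loop are illustrative assumptions, not the paper's exact procedure.

```python
def rank_normalize(data):
    """Replace each attribute value by its rank fraction in [0, 1],
    so no assumption is made about the distribution of random data."""
    n, d = len(data), len(data[0])
    cols = []
    for j in range(d):
        order = sorted(range(n), key=lambda i: data[i][j])
        ranks = [0.0] * n
        for r, i in enumerate(order):
            ranks[i] = r / (n - 1) if n > 1 else 0.0
        cols.append(ranks)
    return [[cols[j][i] for j in range(d)] for i in range(n)]

def subspace_sum(p, q, w):
    """Number of axis-parallel projections in which q lies within w of p
    in every retained dimension: 2 ** (number of close dimensions)."""
    close = sum(1 for a, b in zip(p, q) if abs(a - b) <= w)
    return 2 ** close

def scores(data, w=0.1):
    """Aggregate subspace-sum score per point; noise points score low
    because they are rarely close to others in many dimensions at once."""
    norm = rank_normalize(data)
    return [sum(subspace_sum(p, q, w) for q in norm if q is not p)
            for p in norm]
```

On data where most points lie on a correlated structure, a point that is close to others only in single dimensions (never in several at once) receives a visibly lower score, which is the basis for separating non-random data from noise.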
67.
Modern interaction techniques such as non-intrusive gestures provide means of interacting with distant displays and smart objects without touching them. We were interested in the effects of feedback modality (auditory, haptic or visual), and its combined effect with input modality, on user performance and experience in such interactions. We therefore conducted two exploratory experiments in which numbers were entered, either by gaze or by hand, using gestures composed of four stroke elements (up, down, left and right). In Experiment 1, simple feedback was given on each stroke during the motor action of gesturing: an audible click, a haptic tap or a visual flash. In Experiment 2, semantic feedback was given on the final gesture: the entered number was spoken, coded by haptic taps or shown as text. With simultaneous simple feedback in Experiment 1, hand input was slower but more accurate than gaze input. With semantic feedback in Experiment 2, however, hand input was only slower. The effects of feedback modality were of minor importance; nevertheless, the semantic haptic feedback in Experiment 2 proved useless, at least without extensive training. Error patterns differed between the two input modes, but again did not depend on feedback modality. Taken together, the results show that in designing gestural systems, the choice of feedback modality can be given low priority; it can be chosen according to the task, context and user preferences.
68.
One has a large computational workload that is "divisible" (the granularity of its constituent tasks can be adjusted arbitrarily) and access to p remote computers that can assist in computing the workload. How can one best utilize these computers? Two features complicate this question. First, the remote computers may differ from one another in speed. Second, each remote computer is subject to interruptions of known likelihood that kill all work in progress on it. One wishes to orchestrate sharing the workload with the remote computers in a way that maximizes the expected amount of work completed. We deal with three versions of this problem. The simplest version ignores communication costs but allows computers to differ in speed (a heterogeneous set of computers). The other two versions account for communication costs, first with identical remote computers (a homogeneous set) and then with computers that may differ in speed. We provide exact expressions for the optimal work expectation in all three versions of the problem: explicit closed-form expressions for the first two, and a recurrence that computes the optimal value for the last, most general version.
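The risk-versus-throughput trade-off at the heart of this problem can be sketched for the simplest setting (no communication costs, independent computers). The linear risk model below, in which computer i finishes an assigned chunk of size w with probability max(0, 1 - kappa_i * w), is an illustrative assumption, not necessarily the paper's model; under it, the expected work from one computer is w * (1 - kappa * w), maximised at w* = 1 / (2 * kappa).

```python
def optimal_chunk(kappa):
    """Chunk size maximising w * (1 - kappa * w) under the linear risk model."""
    return 1.0 / (2.0 * kappa)

def expected_work(w, kappa):
    """Expected completed work: the whole chunk survives or none of it does."""
    p_survive = max(0.0, 1.0 - kappa * w)
    return w * p_survive

def allocate(kappas, total):
    """Greedy heuristic: give each computer its risk-optimal chunk,
    most reliable (smallest kappa) first, truncating when work runs out."""
    alloc, left = [], total
    for k in sorted(kappas):
        w = min(optimal_chunk(k), left)
        alloc.append(w)
        left -= w
    return alloc
```

The key intuition survives in this toy version: assigning a risky computer more work than w* lowers the expectation, because the extra exposure to interruption outweighs the extra work attempted.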
69.
This paper presents a symbolic formalism for modeling and retrieving video data via the moving objects contained in the video images. The model integrates the representation of individual moving objects in a scene with the time-varying relationships between them by incorporating both the notion of object tracks and temporal sequences of PIRs (projection interval relationships). The model is supported by a set of operations that form the basis of a moving-object algebra. This algebra allows one to retrieve scenes, and information from scenes, by specifying both spatial and temporal properties of the objects involved. It also provides operations to create new scenes from existing ones. A prototype implementation is described that allows queries to be specified either via an animation sketch or using the moving-object algebra.
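The abstract does not define the PIRs themselves, so as a stand-in the sketch below classifies the relationship between two 1-D projection intervals using the well-known Allen-style interval relations, and builds a temporal sequence of relations from two object tracks; the relation names and the sequence-collapsing rule are assumptions for illustration.

```python
def interval_relation(a, b):
    """Allen-style relation between intervals a = (a1, a2) and b = (b1, b2),
    with a1 < a2 and b1 < b2. Inverse cases recurse on the swapped pair."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:                      return "before"
    if a2 == b1:                     return "meets"
    if a1 < b1 and b1 < a2 < b2:     return "overlaps"
    if a1 == b1 and a2 < b2:         return "starts"
    if b1 < a1 and a2 < b2:          return "during"
    if b1 < a1 and a2 == b2:         return "finishes"
    if a1 == b1 and a2 == b2:        return "equals"
    return "inverse-" + interval_relation(b, a)

def pir_sequence(track_a, track_b):
    """Temporal sequence of projection interval relationships for two tracks
    (per-frame projection intervals), collapsing consecutive repeats."""
    seq = []
    for ia, ib in zip(track_a, track_b):
        r = interval_relation(ia, ib)
        if not seq or seq[-1] != r:
            seq.append(r)
    return seq
```

A query such as "object A passes object B" could then be phrased as a pattern over such a sequence, e.g. "before" followed by "overlaps" followed by an inverse relation.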
70.
The main goal of this paper is to show how relatively minor modifications of well-known algorithms (in particular, back-propagation) can dramatically increase the performance of an artificial neural network (ANN) for time-series prediction. We call our proposed sets of modifications the 'self-momentum', 'Freud' and 'Jung' rules. In our opinion, they exemplify an alternative approach to the design of learning strategies for ANNs, one that focuses on basic mathematical conceptualization rather than on formalism and demonstration. The complexity of actual prediction problems makes it necessary to experiment with modeling possibilities whose inherent mathematical properties are often not yet well understood. Time-series prediction in stock markets is a case in point. It is well known that asset-price dynamics in financial markets are difficult to trace, let alone to predict with an operationally interesting degree of accuracy. We therefore take financial prediction as a meaningful test bed for validating our techniques. We discuss in some detail both the theoretical underpinnings of the technique and our case study on financial prediction, finding encouraging evidence that supports the theoretical and operational viability of our new ANN specifications. Ours is clearly only a preliminary step: further developments of ANN architectures with increasingly sophisticated 'learning to learn' characteristics are now under study and test.
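The paper's 'self-momentum', 'Freud' and 'Jung' rules are its own modifications and are not reproduced here; the sketch below shows only the classical momentum update (v <- mu*v - lr*grad) that such back-propagation variants build on, applied to one-step-ahead time-series prediction with a single linear neuron. The lag count, learning rate and test series are illustrative assumptions.

```python
import math

def train_momentum(series, lags=2, lr=0.01, mu=0.9, epochs=200):
    """Fit a linear one-step-ahead predictor with SGD plus classical momentum."""
    w = [0.0] * (lags + 1)   # lag weights + bias
    v = [0.0] * (lags + 1)   # velocity (momentum buffer)
    for _ in range(epochs):
        for t in range(lags, len(series)):
            x = list(series[t - lags:t]) + [1.0]        # inputs + bias input
            y_hat = sum(wi * xi for wi, xi in zip(w, x))
            err = y_hat - series[t]
            for i in range(len(w)):                     # v <- mu*v - lr*grad
                v[i] = mu * v[i] - lr * err * x[i]
                w[i] += v[i]
    return w

def mse(series, w, lags=2):
    """Mean squared one-step-ahead prediction error of weights w."""
    errs = []
    for t in range(lags, len(series)):
        x = list(series[t - lags:t]) + [1.0]
        errs.append((sum(wi * xi for wi, xi in zip(w, x)) - series[t]) ** 2)
    return sum(errs) / len(errs)

series = [math.sin(0.3 * t) for t in range(100)]  # exactly a 2-lag linear AR
weights = train_momentum(series)
```

Because a sinusoid satisfies sin(0.3t) = 2cos(0.3)sin(0.3(t-1)) - sin(0.3(t-2)), the two-lag linear predictor can in principle fit it exactly, which makes the example a clean check that the momentum update converges.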