41.
Shiftwork involving early morning starts and night work can affect both sleep and fatigue. This study aimed to assess the impact of different rostering schedules at an Australian mine site on sleep and subjective sleep quality. Participants worked one of four rosters:
4 × 4 (n = 14): 4D4O4N4O
7 × 4 (n = 10): 7D4O7N4O
10 × 5 (n = 17): 5D5N5O
14 × 7 (n = 12): 7D7N7O
Sleep (wrist actigraphy and sleep diaries) was monitored for a full roster cycle, including days off. Total sleep time (TST) was longer on days off (7.0 ± 1.9 h) than on day shifts (6.0 ± 1.0 h) and night shifts (6.2 ± 1.6 h). Despite this increase in TST on days off, it may be insufficient for recovery from the severe sleep restriction occurring during work periods. Restricted sleep and quick shift-change periods may lead to long-term sleep loss and associated fatigue.
42.
With the development of condition-based maintenance techniques and the consequent requirement for good machine learning methods, new challenges arise in unsupervised learning. In real-world situations, because the relevant features that reflect the true machine condition are often unknown a priori, condition monitoring systems based on unimportant features, e.g. noise, might suffer high false-alarm rates, especially when the characteristics of failures are costly or difficult to learn. Therefore, it is important to select the most representative features for unsupervised learning in fault diagnostics. In this paper, a hybrid feature selection scheme (HFS) for unsupervised learning is proposed to improve the robustness and accuracy of fault diagnostics. It provides a general framework for feature selection based on significance evaluation and similarity measurement with respect to multiple clustering solutions. The effectiveness of the proposed HFS method is demonstrated in a bearing fault diagnostics application and by comparison with other feature selection methods.
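The combination of a significance criterion with a redundancy (similarity) check can be illustrated with a toy sketch. This is an illustrative simplification, not the paper's HFS algorithm: the function names and the choice of variance as the significance score and pairwise correlation as the similarity measure are assumptions.

```python
import statistics

def _corr(x, y):
    """Pearson correlation between two feature columns."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def select_features(samples, k=2):
    """Rank features by variance (a stand-in significance score), then
    greedily skip any feature that is nearly a duplicate (high absolute
    correlation) of an already-selected one."""
    n_feat = len(samples[0])
    cols = [[s[j] for s in samples] for j in range(n_feat)]
    ranked = sorted(range(n_feat),
                    key=lambda j: statistics.pvariance(cols[j]),
                    reverse=True)
    selected = []
    for j in ranked:
        if all(abs(_corr(cols[j], cols[i])) < 0.95 for i in selected):
            selected.append(j)
        if len(selected) == k:
            break
    return selected
```

In this sketch a noisy but non-redundant feature can still win over an informative feature that duplicates one already chosen, which is the basic trade-off any significance-plus-similarity scheme has to manage.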
43.
Recently there has been an increased demand for imaging systems in support of high-speed digital printing. The required increase in performance can be achieved through effective parallel execution of image processing applications in a distributed cluster computing environment. The output of the system must be presented to a raster-based display at regular intervals, effectively establishing a hard deadline for the production of each image. Failure to complete a rasterization task before its deadline results in an unacceptable interruption of service. The goal of this research was to derive a metric for measuring robustness in this environment and to design a resource allocation heuristic capable of completing each rasterization task before its assigned deadline, thus preventing any service interruptions. We present a mathematical model of such a cluster-based raster imaging system, derive a robustness metric for evaluating heuristics in this environment, and demonstrate the use of the metric in making resource allocation decisions. The heuristics are evaluated within a simulation of the studied raster imaging system. We demonstrate the effectiveness of the heuristics by comparing their results with those of a resource allocation heuristic commonly used in this type of system.
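One simple way a deadline-based robustness metric could be defined is as the minimum slack across all tasks. This is a hypothetical sketch of the general idea, not the metric derived in the paper:

```python
def robustness(finish_times, deadlines):
    """Minimum slack (deadline minus predicted finish time) over all
    rasterization tasks under a given allocation. A larger minimum slack
    means the allocation tolerates more uncertainty in execution times
    before any deadline is violated; a negative value means a deadline
    is already predicted to be missed."""
    return min(d - f for f, d in zip(finish_times, deadlines))
```

A resource allocation heuristic in this setting could then compare candidate assignments by this value and prefer the one with the largest minimum slack.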
44.
The advent of the Internet has led to significant growth in the amount of information available, resulting in information overload, i.e. individuals have too much information to make a decision. To resolve this problem, collaborative tagging systems form a categorization called a folksonomy in order to organize web resources. A folksonomy aggregates the results of personal free tagging of information and objects to form a categorization structure that utilizes the collective intelligence of crowds. A folksonomy is more appropriate for organizing huge amounts of information on the Web than traditional taxonomies established by expert cataloguers. However, the attributes of collaborative tagging systems and their folksonomies make them impractical for organizing resources in personal environments.

This work designs a desktop collaborative tagging (DCT) system that enables collaborative workers to tag their documents, and proposes an application of the DCT system to patent analysis. The folksonomy in DCT is built by aggregating personal tagging results, and is represented by a concept space. Concept spaces provide synonym control, tag recommendation, and relevant search. Additionally, to protect the privacy of authors and to decrease transmission cost, relations between tagged and untagged documents are constructed by extracting documents' features rather than adopting the full text.

Experimental results reveal that the adoption rate of recommended tags for new documents increases by 10% after users have tagged five or six documents. Furthermore, DCT can recommend tags with higher adoption rates when given new documents with topics similar to previously tagged ones. The relevant search in DCT is observed to be superior to keyword search when frequently used tags are adopted as queries. The average precision, recall, and F-measure of DCT are 12.12%, 23.08%, and 26.92% higher than those of keyword search.

DCT allows a multi-faceted categorization of resources for collaborative workers and recommends tags for categorizing resources to simplify categorization. Additionally, the DCT system provides relevance search, which is more effective than traditional keyword search for searching personal resources.
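The tag-recommendation step can be illustrated with a toy feature-similarity sketch. This is hypothetical: DCT's concept-space method is more elaborate, and the function name, term-count features, and cosine scoring here are assumptions.

```python
from collections import Counter

def recommend_tags(new_doc_terms, tagged_docs, top_n=3):
    """Score each existing tag by the cosine similarity between the new
    document's extracted term features and those of the documents that
    carry the tag, then return the top-scoring tags. `tagged_docs` is a
    list of (term_list, tag_list) pairs."""
    def cosine(a, b):
        common = set(a) & set(b)
        num = sum(a[t] * b[t] for t in common)
        den = (sum(v * v for v in a.values()) *
               sum(v * v for v in b.values())) ** 0.5
        return num / den if den else 0.0

    new_vec = Counter(new_doc_terms)
    scores = Counter()
    for terms, tags in tagged_docs:
        sim = cosine(new_vec, Counter(terms))
        for tag in tags:
            scores[tag] += sim
    return [t for t, _ in scores.most_common(top_n)]
```

Working on extracted term features rather than full text, as in this sketch, is what lets such a scheme relate tagged and untagged documents without transmitting or exposing the documents themselves.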
45.
Coastal water mapping from remote-sensing hyperspectral data suffers from poor retrieval performance when the targeted parameters have little effect on subsurface reflectance, especially due to the ill-posed nature of the inversion problem. For example, depth cannot accurately be retrieved for deep water, where the bottom influence is negligible. Similarly, for very shallow water it is difficult to estimate the water quality because the subsurface reflectance is affected more by the bottom than by optically active water components.

Most methods based on radiative transfer model inversion do not consider the distribution of targeted parameters within the inversion process, thereby implicitly assuming that any parameter value in the estimation range has the same probability. In order to improve the estimation accuracy for the above limiting cases, we propose to regularize the objective functions of two estimation methods (maximum likelihood or ML, and hyperspectral optimization process exemplar, or HOPE) by introducing local prior knowledge on the parameters of interest. To do so, loss functions are introduced into ML and HOPE objective functions in order to reduce the range of parameter estimation. These loss functions can be characterized either by using prior or expert knowledge, or by inferring this knowledge from the data (thus avoiding the use of additional information).

This approach was tested both on simulated and real hyperspectral remote-sensing data. We show that the regularized objective functions are more peaked than their non-regularized counterparts when the parameter of interest has little effect on subsurface reflectance. As a result, the estimation accuracy of regularized methods is higher for these depth ranges. In particular, when evaluated on real data, these methods were able to estimate depths up to 20 m, while corresponding non-regularized methods were accurate only up to 13 m on average for the same data.

This approach thus provides a solution to deal with such difficult estimation conditions. Furthermore, because no specific framework is needed, it can be extended to any estimation method that is based on iterative optimization.
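The regularization idea, augmenting the inversion objective with a loss term that encodes local prior knowledge about the parameter, can be sketched minimally as follows. This is an illustration only, with a quadratic penalty and grid search; the function `fit_depth`, the toy forward model, and the penalty form are assumptions, not the paper's ML or HOPE implementations.

```python
def fit_depth(model, observed, depths, prior_center=None, weight=0.0):
    """Pick the depth minimizing a least-squares misfit between modeled
    and observed reflectance, optionally regularized by a quadratic loss
    that pulls the estimate toward locally inferred prior knowledge.
    `model(d)` returns the predicted reflectance vector at depth d."""
    def objective(d):
        misfit = sum((model(d)[i] - observed[i]) ** 2
                     for i in range(len(observed)))
        penalty = (weight * (d - prior_center) ** 2
                   if prior_center is not None else 0.0)
        return misfit + penalty
    return min(depths, key=objective)
```

When the parameter barely affects reflectance, the misfit term is nearly flat and the penalty dominates, which is exactly how a regularized objective becomes more peaked than its non-regularized counterpart.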
46.
In support of a generalization of systems theory, this paper introduces a new approach to modeling complex distributed systems. It offers an analytic framework for describing the behavior of interactive cyberphysical systems (CPSs), which are networked stationary or mobile information systems responsible for the real-time governance of physical processes whose behaviors unfold in cyberspace. The framework is predicated on a cyberspace-time reference model comprising three spatial dimensions plus time. The spatial domains include geospatial, infospatial, and sociospatial references, the latter describing relationships among sovereign enterprises (rational agents) that choose voluntarily to organize and interoperate for individual and mutual benefit through geospatial (physical) and infospatial (logical) transactions. Of particular relevance to CPSs are notions of timeliness and value, particularly as they relate to the real-time governance of physical processes and engagements with other cooperating CPSs. Our overarching interest, as with celestial mechanics, is in the formation and evolution of clusters of cyberspatial objects and the federated systems they form.
47.
A sparser but more efficient connection rule (called a bond-cutoff method) for a simplified alpha-carbon coarse-grained elastic network model is presented. One of the conventional connection rules for elastic network models is the distance-cutoff method, where virtual springs connect an alpha-carbon with all neighboring alpha-carbons within a predefined cutoff distance. However, although the maximum interaction distance between alpha-carbons is reported to be 7 angstroms, this cutoff value can leave the elastic network unstable for many protein structures. Thus, a larger cutoff value (>11 angstroms) has often been used in previous research to establish a stable elastic network model. To overcome this problem, a connection rule for the backbone model is proposed that satisfies the minimum condition for stabilizing an elastic network. Based on the backbone connections, each type of chemical interaction is considered and added to the elastic network model: disulfide bonds, hydrogen bonds, and salt bridges. In addition, the van der Waals forces between alpha-carbons are modeled using the distance-cutoff method. With the proposed connection rule, one can build an elastic network model with a distance cutoff of less than 7 angstroms, which reveals protein flexibility more sharply. Moreover, the normal modes from the new elastic network model reflect conformational changes of a given protein better than those from the distance-cutoff method. The method also saves computational cost when calculating normal modes, because it reduces the total number of connections. As validation, six example proteins are tested; computational times and the overlap values between the conformational change and the infinitesimal motion calculated by normal mode analysis are presented. Animations are also available at UMass Morph Server (http://biomechanics.ecs.umass.edu/umms.html).
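For contrast, the conventional distance-cutoff construction and the normal-mode calculation it feeds can be sketched minimally in one dimension. This is illustrative only: the paper's bond-cutoff rule instead builds connections from backbone bonds and chemical interactions (disulfide, hydrogen bonds, salt bridges), and the coordinates and uniform spring constant below are assumptions.

```python
import numpy as np

def enm_hessian(coords, cutoff=7.0, k=1.0):
    """Distance-cutoff elastic network in 1-D: connect every pair of
    alpha-carbon sites closer than `cutoff` by a spring of stiffness k
    and return the resulting Hessian (a graph Laplacian scaled by k)."""
    n = len(coords)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(coords[i] - coords[j]) < cutoff:
                H[i, i] += k
                H[j, j] += k
                H[i, j] -= k
                H[j, i] -= k
    return H

# Normal modes are the eigenvectors of the Hessian; near-zero eigenvalues
# correspond to rigid-body motion, nonzero ones to internal flexibility.
coords = [0.0, 3.8, 7.6, 11.4]   # alpha-carbon spacing of roughly 3.8 angstroms
w, v = np.linalg.eigh(enm_hessian(coords))
```

A sparser connection rule shrinks the number of nonzero Hessian entries, which is where the computational saving in the normal-mode calculation comes from.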
48.
Multimedia systems design generally requires a collaborative effort from a group of designers with a variety of backgrounds and tasks, such as content experts, instructional designers, media specialists, users, and so forth. However, currently available design tools on the market are mainly designed for a single user. Tools intended to support a collaborative design process should coordinate the independent activities of individual designers.

This research investigated support for work groups engaged in designing multimedia systems. Specifically, it discusses a new collaborative design environment, called the KMS (Knowledge Management System)-based design environment, in which multimedia designers can share their design knowledge freely. Through two experimental groups, the research investigated the impacts of the KMS-based design environment on collaborative design activities (knowledge creating, knowledge securing, knowledge distributing, and knowledge retrieving). The findings showed that the KMS-based design environment is a promising environment for collaborative multimedia systems design: it supported creating, securing, and retrieving knowledge, but it did not support distributing knowledge. In addition, the research found that social interactions between group members played an important role in the success of collaborative multimedia systems design, and that the KMS-based design environment did not support the socialization of group members. Furthermore, this inability to support socialization was linked to the environment's low performance in supporting the knowledge-distributing activity. The research also explored the desired features of a collaborative support tool for multimedia systems design.
49.
The problem of finding efficient workload distribution techniques is becoming increasingly important today for heterogeneous distributed systems where the availability of compute nodes may change spontaneously over time. Resource-allocation policies designed for such systems should maximize the performance and, at the same time, be robust against failure and recovery of compute nodes. Such a policy, based on the concepts of the Derman–Lieberman–Ross theorem, is proposed in this work, and is applied to a simulated model of a dedicated system composed of a set of heterogeneous image processing servers. Assuming that each image results in a “reward” if its processing is completed before a certain deadline, the goal for the resource allocation policy is to maximize the expected cumulative reward. An extensive analysis was done to study the performance of the proposed policy and compare it with the performance of some existing policies adapted to this environment. Our experiments conducted for various types of task-machine heterogeneity illustrate the potential of our method for solving resource allocation problems in a broad spectrum of distributed systems that experience high failure rates.
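The intuition behind a Derman–Lieberman–Ross-style assignment can be sketched for the static case: with independent success probabilities and per-task rewards, the expected reward sum of p·r terms is maximized by pairing the largest rewards with the most reliable machines (sorted matching). This is a simplified illustration of that intuition, not the paper's policy, which handles sequential arrivals and node failure/recovery; the function name and inputs are assumptions.

```python
def expected_reward_assignment(machine_probs, task_rewards):
    """Assign tasks to machines by matching sorted order: the highest
    reward goes to the machine with the highest success probability.
    Returns the (machine, task) pairs and the expected total reward,
    i.e. the sum of machine_prob * task_reward over the pairing."""
    machines = sorted(range(len(machine_probs)),
                      key=lambda i: machine_probs[i], reverse=True)
    tasks = sorted(range(len(task_rewards)),
                   key=lambda j: task_rewards[j], reverse=True)
    pairs = list(zip(machines, tasks))
    value = sum(machine_probs[m] * task_rewards[t] for m, t in pairs)
    return pairs, value
```

In a deadline setting, the "success probability" of a machine for a task would itself come from an estimate of finishing before the deadline, which is where heterogeneity and failure rates enter.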
50.