61.
When Google proposed the Transformer in 2017, it was first used for machine translation tasks and achieved the state of the art at that time. Although current neural machine translation models can generate high-quality translation results, there are still mistranslations and omissions of key information in the translation of long sentences. On the other hand, the most important part of traditional translation tasks is the translation of key information: as long as the key information is translated accurately and completely, the quality of the final translation can still be guaranteed even if other parts of the output are translated incorrectly. In order to solve the problems of mistranslation and missed translation effectively, and to improve the accuracy and completeness of long-sentence translation in machine translation, this paper proposes a key information fused neural machine translation model based on the Transformer. The proposed model extracts the keywords of the source-language text separately as an additional encoder input. After being encoded in the same way as the source-language text, the keyword representation is fused with the encoder output of the source-language text, and the processed key information is then fed into the decoder. By incorporating keyword information from the source-language sentence, the model performs reliably on long-sentence translation. To verify the effectiveness of the proposed key-information fusion method, a series of experiments was carried out on the validation set. The experimental results show that the Bilingual Evaluation Understudy (BLEU) score of the proposed model on the Workshop on Machine Translation (WMT) 2017 test dataset is higher than that of Google's Transformer on the same dataset, demonstrating the advantages of the proposed model.
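The abstract does not specify the exact fusion operator, so the following is only a minimal PyTorch-style sketch of one plausible realization: a keyword sequence is passed through the same encoder as the source sentence, pooled, and gated into the source memory before decoding. The module names, the mean-pooling step, and the sigmoid gate are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a key-information-fused Transformer (not the authors' code).
# Assumption: keywords are encoded with the same encoder as the source text and
# fused into the encoder output ("memory") via a learned gate before decoding.
# Positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class KeyInfoFusedTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)  # shared by source and keywords
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.gate = nn.Linear(2 * d_model, d_model)                  # fusion gate (assumption)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, keyword_ids, tgt_ids):
        src_mem = self.encoder(self.embed(src_ids))                  # (B, S, D)
        kw_mem = self.encoder(self.embed(keyword_ids))               # (B, K, D), same encoder
        kw_summary = kw_mem.mean(dim=1, keepdim=True).expand_as(src_mem)  # pooled keywords
        g = torch.sigmoid(self.gate(torch.cat([src_mem, kw_summary], dim=-1)))
        fused_mem = src_mem + g * kw_summary                         # gated key-information fusion
        dec = self.decoder(self.embed(tgt_ids), fused_mem)
        return self.out(dec)
```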
62.

Introduction

Subjective workload measures are usually administered in a visual-manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of the hands-free administration of two subjective workload questionnaires - the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) - in a surgical training environment where manual responding is often constrained.

Method

Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of 3 tasks - cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility.

Results

The NASA-TLX was scored in 2 ways - (1) the traditional procedure using participant-specific weights to combine its 6 subscales, and (2) a simplified procedure - the NASA Raw Task Load Index (NASA-RTLX) - using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r = 0.85 (NASA-TLX) and r = 0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r = 0.78 (“mental demand”) to r = 0.31 (“effort”). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence for the hands-free and written versions was r = 0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η² = 0.138) was substantially less than that for the written version (partial η² = 0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students who showed robust task effects when using the hands-free MRQ (partial η² = 0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a “spillover” effect in which participants’ judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks.
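As a worked illustration of the two scoring procedures (not data from this study), the sketch below computes a weighted NASA-TLX score from six hypothetical subscale ratings and pairwise-comparison weights, together with the unweighted NASA-RTLX from the same ratings.

```python
# Illustrative NASA-TLX scoring (hypothetical ratings; not data from this study).
# Weighted TLX: each of the 6 subscale ratings (0-100) is multiplied by a weight
# (0-5) obtained from 15 pairwise comparisons (weights sum to 15), then divided by 15.
# NASA-RTLX: the simple unweighted mean of the 6 subscale ratings.
ratings = {"mental": 70, "physical": 40, "temporal": 55,
           "performance": 35, "effort": 60, "frustration": 45}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}   # sums to 15

assert sum(weights.values()) == 15
weighted_tlx = sum(ratings[k] * weights[k] for k in ratings) / 15.0
raw_tlx = sum(ratings.values()) / len(ratings)

print(f"Weighted NASA-TLX: {weighted_tlx:.1f}")   # 56.7
print(f"NASA-RTLX (raw):   {raw_tlx:.1f}")        # 50.8
```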

Conclusion

Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be used when interpreting the individual subscales if the object is to make comparisons between studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument’s sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
63.
Quadratic optimization lies at the very heart of many structural pattern recognition and computer vision problems, such as graph matching, object recognition, image segmentation, etc., and it is therefore of crucial importance to devise algorithmic solutions that are both efficient and effective. As it turns out, a large class of quadratic optimization problems can be formulated in terms of so-called “standard quadratic programs” (StQPs), which ask for finding the extrema of a quadratic polynomial over the standard simplex. Computationally, the standard approach for attacking this class of problems is to use replicator dynamics, a well-known family of algorithms from evolutionary game theory inspired by Darwinian selection processes. Despite their effectiveness in finding good solutions in a variety of applications, however, replicator dynamics suffer from being computationally expensive, as they require a number of operations per step which grows quadratically with the dimensionality of the problem being solved. In order to avoid this drawback, in this paper we propose a new population game dynamics (InImDyn) which is motivated by the analogy with infection and immunization processes within a population of “players.” We prove that the evolution of our dynamics is governed by a quadratic Lyapunov function, representing the average population payoff, which strictly increases along non-constant trajectories and that local solutions of StQPs are asymptotically stable (i.e., attractive) points. Each step of InImDyn is shown to have a linear time/space complexity, thereby allowing us to use it as a more efficient alternative to standard approaches for solving StQPs and related optimization problems. Indeed, we demonstrate experimentally that InImDyn is orders of magnitude faster than, and as accurate as, replicator dynamics on various applications ranging from tree matching to image registration, matching and segmentation.
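For context, the replicator-dynamics baseline that the paper improves upon can be sketched in a few lines; the code below iterates the discrete-time replicator equation for an StQP over the standard simplex, with a per-step cost that is quadratic in the problem dimension. The payoff matrix is an arbitrary illustrative example, and the proposed InImDyn dynamics is not reproduced here.

```python
# Discrete-time replicator dynamics for a standard quadratic program (StQP):
#   maximize x^T A x  subject to x in the standard simplex.
# This is the O(n^2)-per-step baseline discussed in the paper; the proposed
# InImDyn dynamics is not reproduced here. A is an arbitrary example matrix.
import numpy as np

def replicator_dynamics(A, x0=None, max_iter=1000, tol=1e-10):
    n = A.shape[0]
    x = np.full(n, 1.0 / n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        Ax = A @ x                      # O(n^2) work per iteration
        payoff = x @ Ax
        x_new = x * Ax / payoff         # requires A (or a shifted A) to be nonnegative
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

A = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
x_star = replicator_dynamics(A)
print(x_star, x_star @ A @ x_star)      # local solution of the StQP and its value
```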
64.
This paper addresses the problem of autonomous navigation of a micro air vehicle (MAV) in GPS‐denied environments. We present experimental validation and analysis for our system that enables a quadrotor helicopter, equipped with a laser range finder sensor, to autonomously explore and map unstructured and unknown environments. The key challenge for enabling GPS‐denied flight of a MAV is that the system must be able to estimate its position and velocity by sensing unknown environmental structure with sufficient accuracy and low enough latency to stably control the vehicle. Our solution overcomes this challenge in the face of MAV payload limitations imposed on sensing, computational, and communication resources. We first analyze the requirements to achieve fully autonomous quadrotor helicopter flight in GPS‐denied areas, highlighting the differences between ground and air robots that make it difficult to use algorithms developed for ground robots. We report on experiments that validate our solutions to key challenges, namely a multilevel sensing and control hierarchy that incorporates a high‐speed laser scan‐matching algorithm, data fusion filter, high‐level simultaneous localization and mapping, and a goal‐directed exploration module. These experiments illustrate the quadrotor helicopter's ability to accurately and autonomously navigate in a number of large‐scale unknown environments, both indoors and in the urban canyon. The system was further validated in the field by our winning entry in the 2009 International Aerial Robotics Competition, which required the quadrotor to autonomously enter a hazardous unknown environment through a window, explore the indoor structure without GPS, and search for a visual target. © 2011 Wiley Periodicals, Inc.
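The abstract does not detail the data fusion filter, so the following is only a generic illustration of the kind of estimation such a filter performs: a constant-velocity Kalman filter that fuses noisy position fixes (such as those produced by laser scan matching) into low-latency position and velocity estimates. All parameters are assumptions, and the sketch is not the system described in the paper.

```python
# Generic constant-velocity Kalman filter, illustrating how intermittent,
# noisy position fixes (e.g., from laser scan matching) can be fused into the
# low-latency position/velocity estimates needed to control a vehicle.
# Illustrative sketch only; not the filter used in the paper.
import numpy as np

dt = 0.02                                   # 50 Hz update rate (assumption)
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity motion model
H = np.array([[1.0, 0.0]])                  # only position is measured
Q = np.diag([1e-4, 1e-2])                   # process noise (assumption)
R = np.array([[4e-4]])                      # measurement noise (assumption)

x = np.zeros(2)                             # state: [position, velocity]
P = np.eye(2)

def kf_step(x, P, z):
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for t in range(500):
    true_pos = 0.5 * t * dt                 # vehicle moving at 0.5 m/s
    z = np.array([true_pos + np.random.normal(0, 0.02)])
    x, P = kf_step(x, P, z)
print("estimated position/velocity:", x)    # velocity estimate approaches ~0.5 m/s
```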
65.
Combined analysis of multiple data sources has increasing application interest, in particular for distinguishing shared and source-specific aspects. We extend this rationale to the generative and non-parametric clustering setting by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet Processes, inducing an infinite-dimensional contingency table between the sources. The commonalities between the sources are modeled by an infinite component model of the contingency table, interpretable as non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. We discover complex relationships between the marginals (that are multimodal in both marginals) that would remain undetected by simpler models. Cluster analysis of co-expression is a standard method of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
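The full hierarchical Dirichlet process construction is beyond a short sketch, but the core two-view idea (cluster each source with a flexible mixture, then expose shared structure by non-negatively factorizing the cross-view contingency table) can be illustrated with a finite approximation. Everything below, including the synthetic data and the choice of scikit-learn components, is an illustrative stand-in rather than an implementation of the paper's model.

```python
# Finite illustration of the two-view idea: cluster each view (e.g., mRNA
# expression and protein abundance) with a Dirichlet-process-like mixture,
# build the contingency table of joint cluster assignments over genes, and
# factorize it non-negatively to expose shared components.
# This is a finite stand-in for the paper's hierarchical non-parametric model.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
view_mrna = np.concatenate([rng.normal(-2, 0.5, (150, 1)), rng.normal(2, 0.5, (150, 1))])
view_prot = np.concatenate([rng.normal(-1, 0.4, (150, 1)), rng.normal(3, 0.4, (150, 1))])

def dp_like_clusters(X, max_components=10):
    gm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)
    return gm.predict(X)

z1 = dp_like_clusters(view_mrna)
z2 = dp_like_clusters(view_prot)

# Contingency table: how often lower-level clusters from the two views co-occur.
table = np.zeros((z1.max() + 1, z2.max() + 1))
for a, b in zip(z1, z2):
    table[a, b] += 1

# Non-negative factorization of the table reveals components shared by the views.
n_comp = min(2, table.shape[0], table.shape[1])
W = NMF(n_components=n_comp, init="nndsvda", random_state=0).fit_transform(table)
print(W.round(1))
```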
66.
We present an integral feedback controller that regulates the average copy number of an assembly in a system of stochastically interacting robots. The mathematical model for these robots is a tunable reaction network, which makes this approach applicable to a large class of other systems, including ones that exhibit stochastic self-assembly at various length scales. We prove that this controller works for a range of setpoints and show how to compute this range both analytically and experimentally. Finally, we demonstrate these ideas on a physical testbed.
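As a toy illustration of the control idea (not the paper's tunable reaction network), the sketch below simulates a simple stochastic assembly/disassembly process and uses an integral controller to drive the number of assemblies toward a setpoint by adjusting the assembly rate. The rates, gain, and discrete-time model are all assumptions.

```python
# Toy integral feedback control of the average number of assemblies in a
# stochastic assembly/disassembly process. Illustrative only: the rates, gain,
# and simple discrete-time stochastic model are assumptions, not the paper's
# tunable reaction network.
import random

random.seed(1)
n_parts = 50          # free parts/robots
n_assemblies = 0
setpoint = 10.0       # desired copy number of the assembly
k_on = 0.01           # controlled assembly rate
k_off = 0.05          # fixed disassembly rate
ki = 1e-4             # integral gain (assumption)
integral = 0.0
dt = 0.1

for step in range(20000):
    # Stochastic assembly: forming one assembly consumes 2 free parts.
    if n_parts >= 2 and random.random() < k_on * n_parts * dt:
        n_parts -= 2
        n_assemblies += 1
    # Stochastic disassembly releases 2 parts.
    if n_assemblies >= 1 and random.random() < k_off * n_assemblies * dt:
        n_parts += 2
        n_assemblies -= 1
    # Integral feedback on the copy-number error adjusts the assembly rate.
    integral += (setpoint - n_assemblies) * dt
    k_on = max(0.0, 0.01 + ki * integral)

print("final assembly count:", n_assemblies, "controlled rate:", round(k_on, 4))
```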
67.
This paper describes the architecture and implementation of a distributed autonomous gardening system with applications in urban/indoor precision agriculture. The garden is a mesh network of robots and plants. The gardening robots are mobile manipulators with an eye-in-hand camera. They are capable of locating plants in the garden, watering them, and locating and grasping fruit. The plants are potted cherry tomatoes enhanced with sensors and computation to monitor their well-being (e.g. soil humidity, state of fruits) and with networking to communicate servicing requests to the robots. By embedding sensing, computation, and communication into the pots, task allocation in the system is coordinated in a decentralized manner, which makes the system scalable and robust against the failure of a centralized agent. We describe the architecture of this system and present experimental results for navigation, object recognition, and manipulation as well as challenges that lie ahead toward autonomous precision agriculture with multi-robot teams.
68.
69.
Well-designed domain-specific languages have three important benefits: (1) the easy expression of problems, (2) the application of domain-specific optimizations (including parallelization), and (3) dramatic improvements in productivity for their users. In this paper we describe a compiler and parallel runtime system for modeling the complex kinetics of rubber vulcanization and olefin polymerization that achieves all of these goals. The compiler allows the development of a system of ordinary differential equations describing a complex vulcanization reaction or single-site olefin polymerization reaction—a task that used to require months—to be done in hours. A specialized common sub-expression elimination pass and other algebraic optimizations sufficiently simplify the complex machine-generated code to allow it to be compiled—eliminating all but 8.0% of the operations in our largest program and enabling over 60 times faster execution on our largest benchmark codes. The parallel runtime and dynamic load-balancing scheme enable fast simulations of the model.
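To illustrate the kind of transformation involved, the sketch below applies a generic common sub-expression elimination pass to a small expression tree, replacing repeated sub-expressions with temporaries; it is a simplified stand-in for the specialized pass described in the paper, not the authors' compiler.

```python
# Generic common sub-expression elimination on an expression tree, as a
# simplified stand-in for the specialized CSE pass described in the paper.
# Expressions are nested binary tuples such as ('*', ('+', 'a', 'b'), 'c').
def cse(expr, seen=None, assigns=None):
    if seen is None:
        seen, assigns = {}, []
    if not isinstance(expr, tuple):          # variable or constant leaf
        return expr, assigns
    op, *args = expr
    args = [cse(a, seen, assigns)[0] for a in args]
    key = (op, *args)
    if key in seen:                          # sub-expression already computed
        return seen[key], assigns
    name = f"t{len(assigns)}"
    seen[key] = name
    assigns.append(f"{name} = {args[0]} {op} {args[1]}")
    return name, assigns

# Machine-generated rate expressions often repeat terms such as (a + b):
expr = ('*', ('+', 'a', 'b'), ('*', ('+', 'a', 'b'), 'c'))
result, assigns = cse(expr)
print("\n".join(assigns), "\nresult =", result)
# t0 = a + b
# t1 = t0 * c
# t2 = t0 * t1
# result = t2
```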
70.
This article sets out to identify the typical risky situations experienced by novice motorcyclists in the real world just after licensing. The procedure consists of a follow-up of six novices during their first two months of riding with their own motorbike instrumented with cameras. The novices completed logbooks on a daily basis in order to identify the risky situations they encountered, and were given face-to-face interviews to identify the context and their shortcomings during the reported events. Data show a large number of road configurations considered as risky by the riders (248 occurrences), especially during the first two weeks. The results revealed that a lack of hazard perception skills contributed to the majority of these incidents. These situations were grouped together to form clusters of typical incident scenarios on the basis of their similarities. The most frequent scenario corresponds to a lane change in dense traffic (15% of all incidents). The discussion shows how this has enhanced our understanding of novice riders’ behaviour and how the findings can improve training and licensing. Lastly, the main methodological limitations of the study and some guidelines for improving future naturalistic riding studies are presented.

Practitioner Summary:

This article aims to identify the risky situations encountered by novice motorcyclists on real roads. Two hundred forty-eight events were recorded and 13 incident scenarios were identified. The results revealed that a lack of hazard perception contributed to the majority of these events. The most frequent scenario corresponds to a lane change in dense traffic.
