61.
We present a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By “repairing the topology”, we mean a systematic way of modifying a given binary image to produce a similar binary image that is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy special properties that make them very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces directly from binary data can only be applied to well-composed images. Finally, we introduce an extension of the algorithm to the repair of 3D digital multivalued images; this extension is applicable to the segmented images produced by multi-object segmentation of 3D digital multivalued images.
James Gee
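As a rough illustration of the well-composedness criterion, the sketch below (our own code, assuming the image is a NumPy boolean array; names are illustrative, not the paper's) tests the planar part of the condition: no axis-aligned 2×2 square of voxels may contain two foreground and two background voxels arranged diagonally. The full 3D criterion additionally excludes two diagonal 2×2×2 configurations, omitted here for brevity.

```python
import numpy as np

def has_planar_critical_config(img):
    """Scan every axis-aligned 2x2 square of voxels and report whether any
    contains two foreground and two background voxels arranged diagonally
    (a 'critical configuration' that violates well-composedness)."""
    img = np.asarray(img, dtype=bool)
    for axis in range(3):
        for s in np.moveaxis(img, axis, 0):   # 2D slices normal to `axis`
            p, q = s[:-1, :-1], s[:-1, 1:]    # the four corners of each
            r, t = s[1:, :-1], s[1:, 1:]      # 2x2 square, vectorized
            if ((p == t) & (q == r) & (p != q)).any():
                return True
    return False
```

A repair algorithm in the spirit of the abstract would flip voxels until this predicate (and its 2×2×2 counterpart) no longer finds any critical configuration.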
62.
When European laboratories decided to develop a digital sound broadcasting system (DSB), they specified three main conditions to fulfil:
  • quality improvement up to the level of ‘CD’ sound, even in difficult reception conditions (mobile vehicles, etc.)
  • significant additional digital data transmission, in order to transform sound broadcasting into a genuinely new service
  • the possibility of a common system for satellite and terrestrial transmissions.
On these bases, the European project ‘Eureka 147’ defined the system called DAB (digital audio broadcasting). In 1992, the ITU conference WARC 92 allocated 40 MHz to DSB in the L-band, in a configuration of complementary terrestrial and satellite networks; nevertheless, the present state of the technology makes such mixed networks almost unfeasible, and the lack of available spectrum in the VHF bands has led a significant number of countries to envisage the L-band for T-DAB. The situation could turn into a competition between terrestrial and satellite networks, especially because the bandwidth available in the L-band is limited. France is among the countries facing this problem: the L-band alone is intended to be used for T-DAB, and broadcasters taking part in the work of ‘Club DAB’ estimated that 20 MHz would be the minimum needed to ensure a successful introduction of T-DAB. That is half of the DSB band. CEPT has already decided to split this band into two parts, but with only one third allocated to T-DAB. The organization intends to arrange a European planning meeting for T-DAB in July 1995, and shortly thereafter several countries are ready to begin implementing terrestrial networks. At the same time, international broadcasters wonder whether satellite transmission could present an alternative to HF.
63.

Classification is usually performed with flat, batch learners, under the assumptions that the problem is stationary and that there are no relations between class labels. Nevertheless, several real-world problems do not satisfy these premises: data may have labels organized hierarchically and may be made available in streaming fashion, meaning that their behavior can drift over time. Existing studies on hierarchical classification do not consider data streams as input, and thus assume the data are stationary and handled by batch learners. The same can be said of work on streaming data, where hierarchical classification is overlooked. Studies concerning each area individually are promising, yet they do not tackle the intersection of the two. This study analyzes the main characteristics of state-of-the-art work on hierarchical classification of data streams with respect to five aspects: (i) problems tackled, (ii) datasets, (iii) algorithms, (iv) evaluation metrics, and (v) research gaps in the area. We performed a systematic literature review of primary studies and retrieved 3,722 papers, of which 42 were identified as relevant and used to answer the aforementioned research questions. We found that the problems handled by hierarchical classification of data streams mainly involve classification of images, human activities, texts, and audio; the datasets are mostly purpose-built or synthetic; the algorithms and evaluation metrics are well-known techniques or based on them; and the research gaps relate to dynamic contexts, data complexity, and computational resource constraints. We also discuss implications for future research and for experiments that consider the characteristics shared by hierarchical classification and data stream classification.

64.
Gold-coated nanodisk arrays of nearly micron periodicity are reported that have the high figure of merit (FOM) and sensitivity necessary for plasmonic refractometric sensing, with the added benefits of suitability for surface-enhanced Raman scattering (SERS), large-scale microfabrication using standard photolithographic techniques, and a simple instrumental setup. The gold nanodisk arrays are covered with a gold layer to excite Bragg modes (BM), the propagative surface plasmons localized by diffraction from the disk array. This generates surface-guided modes, localized as standing waves, leading to highly confined fields, as confirmed by a mapping of the SERS intensity and by numerical simulations with the 3D finite element method. The optimal gold-coated nanodisk arrays are applied to refractometric sensing in transmission spectroscopy with better performance than nanohole arrays, and they are integrated into a 96-well plate reader for detection of IgY proteins in the nanometer range in PBS. The potential for sensing in biofluids is assessed with IgG detection in 1:1 diluted urine. The structure exhibits a high FOM of up to 46, exceeding the FOM of structures supporting surface plasmon polaritons and comparable to more complex nanostructures, demonstrating that subwavelength features are not necessary for high-performance plasmonic sensing.
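The FOM quoted here is conventionally defined as the bulk refractive-index sensitivity (resonance shift per refractive-index unit) divided by the resonance linewidth. A minimal sketch of that definition (the numbers used below are illustrative, not values from the paper):

```python
def figure_of_merit(peak_shift_nm, delta_n_riu, fwhm_nm):
    """FOM = bulk sensitivity (resonance shift per refractive-index unit,
    in nm/RIU) divided by the resonance linewidth (FWHM, in nm)."""
    sensitivity_nm_per_riu = peak_shift_nm / delta_n_riu
    return sensitivity_nm_per_riu / fwhm_nm
```

For example, a 4.6 nm shift for a 0.01 RIU index change, with a 10 nm linewidth, corresponds to a FOM of 46, matching the best value reported in the abstract.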
65.
Deterministic lateral displacement (DLD) devices make it possible to separate nanometer- to micrometer-sized particles around a cutoff diameter as the particles are transported through a microfluidic channel containing slanted rows of pillars. In order to design appropriate DLD geometries for specific separation sizes, robust models are required to predict the value of the cutoff diameter. So far, the proposed models yield a single cutoff diameter for a given DLD geometry. This paper shows that the cutoff diameter actually varies along the DLD channel, especially in narrow pillar arrays. Experimental and numerical results reveal that this variation is induced by boundary effects at the channel side walls, called the wall effect. The wall effect generates unexpected particle trajectories that may compromise the separation efficiency. In order to anticipate the wall effect when designing DLD devices, a predictive model is proposed and validated experimentally. In addition to the usual geometrical parameters, a new parameter, the number of pillars in the channel cross-dimension, is included in the model to investigate its influence on particle trajectories.
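The abstract does not reproduce the model's equations. A common baseline behind single-cutoff DLD design is Davis's empirical correlation, sketched below; note that the paper's wall-effect model adds a dependence on the number of pillars across the channel, which this baseline does not capture.

```python
def dld_critical_diameter(gap, row_shift_fraction):
    """Davis's empirical correlation for the DLD critical (cutoff) diameter:
        D_c = 1.4 * g * epsilon ** 0.48
    where g is the gap between pillars and epsilon is the row-shift fraction
    (1/N for an array whose rows repeat every N periods).
    The result is in the same units as `gap`."""
    assert 0.0 < row_shift_fraction <= 1.0
    return 1.4 * gap * row_shift_fraction ** 0.48
```

With a 10 µm gap and a 10-row period (epsilon = 0.1), the correlation predicts a cutoff of roughly 4.6 µm; larger row shifts give larger cutoffs.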
66.
67.
68.

Introduction

Subjective workload measures are usually administered in a visual-manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of the hands-free administration of two subjective workload questionnaires - the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) - in a surgical training environment where manual responding is often constrained.

Method

Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of 3 tasks - cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility.

Results

The NASA-TLX was scored in two ways: (1) the traditional procedure, using participant-specific weights to combine its six subscales, and (2) a simplified procedure, the NASA Raw Task Load Index (NASA-RTLX), using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r = 0.85 (NASA-TLX) and r = 0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r = 0.78 (“mental demand”) to r = 0.31 (“effort”). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence between the hands-free and written versions was r = 0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η² = 0.138) was substantially lower than that of the written version (partial η² = 0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students, who showed robust task effects when using the hands-free MRQ (partial η² = 0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a “spillover” effect, in which participants’ judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks.
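The two scoring procedures can be sketched as follows. In the standard weighted TLX, each subscale's weight is the number of times it was chosen across the 15 pairwise comparisons, so the six weights sum to 15; the RTLX simply averages the six ratings (a minimal illustration, not the authors' code):

```python
def nasa_rtlx(ratings):
    """Raw TLX (RTLX): the unweighted mean of the six subscale ratings."""
    assert len(ratings) == 6
    return sum(ratings) / 6.0

def nasa_tlx(ratings, weights):
    """Traditional weighted TLX: each weight counts how often that subscale
    was chosen in the 15 pairwise comparisons, so the weights sum to 15 and
    the score is the weighted mean of the six ratings."""
    assert len(ratings) == 6 and len(weights) == 6 and sum(weights) == 15
    return sum(w * r for w, r in zip(weights, ratings)) / 15.0
```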

Conclusion

Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be taken when interpreting the individual subscales if the objective is to compare studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument's sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
69.
We generalize the Kleene theorem to the case where nonassociative products are used. For this purpose, we apply rotations restricted to the root of binary trees.
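A root rotation reassociates a nonassociative product without touching its factors: (a · b) · c becomes a · (b · c). An illustrative sketch in a minimal tree encoding of our own (not the paper's formalism):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A binary product tree: leaves carry labels, internal nodes are products."""
    label: str = ""
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def rotate_at_root(t: Node) -> Node:
    """Reassociate the product at the root: (a . b) . c  ->  a . (b . c).
    Only the root is restructured; the subtrees a, b, c are reused as-is."""
    assert t.left is not None and t.left.right is not None
    pivot = t.left          # the (a . b) node
    t.left = pivot.right    # b becomes the left factor of the new right product
    pivot.right = t         # the old root now represents (b . c)
    return pivot            # the new root represents a . (b . c)
```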
70.
Introduction
All hospitals in the province of Styria (Austria) are well equipped with sophisticated information technology, which provides all-encompassing on-screen patient information. Previous research on the theoretical properties, advantages, and disadvantages of reading from paper vs. reading from a screen has led to the assumption that reading from a screen is slower, less accurate, and more tiring. However, recent flat-screen technology, especially LCD-based, is of such high quality that this assumption should now be challenged. As the electronic storage and presentation of information has many advantages in addition to faster transfer and processing, the use of electronic screens in clinics should outperform the traditional hardcopy in both performance and preference ratings.
This study took place in a county hospital in Styria, Austria, with 111 medical professionals working in a real-life setting. Each was asked to read original and authentic diagnosis reports, a gynecological report and an internal-medicine document, on both screen and paper in randomly assigned order. Reading comprehension was measured by the Chunked Reading Test, and the speed and accuracy of reading performance were quantified. To obtain a full picture of the clinicians' preferences, subjective ratings were also collected.
Results
Wilcoxon signed-rank tests showed no significant differences in reading performance between paper and screen. However, the medical professionals showed a significant (90%) preference for reading from paper. Despite the high quality and the benefits of electronic media, paper still has qualities that cannot be provided electronically to date.