A total of 5,967 results were found; entries 111–120 are shown below.
111.
Structured text is a general concept that is implicit in a variety of approaches to handling information. Syntactically, an item of structured text is a number of grammatically simple phrases together with a semantic label for each phrase. Items of structured text may be nested within larger items of structured text. The semantic labels in a structured text are meant to parameterize a stereotypical situation, and so a particular item of structured text is an instance of that stereotypical situation. Much information is potentially available as structured text including tagged text in XML, text in relational and object-oriented databases, and the output from information extraction systems in the form of instantiated templates. In this paper, we formalize the concept of structured text, and then focus on how we can identify inconsistency in the logical representation of items of structured text. We then present a new framework for merging logical theories that can be employed to merge inconsistent items of structured text. To illustrate, we consider the problem of merging reports such as weather reports.
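As a concrete, though hypothetical, illustration of the data shape described above, the sketch below encodes a small weather-report item as nested label/phrase pairs. The labels, phrases, and helper names are invented for illustration; the paper's logical encoding and merging operators are not reproduced here.

```python
from typing import Dict, List, Union

# An item of structured text: each semantic label maps either to a simple
# phrase or to a nested item (itself a label -> value mapping).
Item = Dict[str, Union[str, "Item"]]

# Hypothetical weather-report instance, loosely following the paper's
# weather-report example; all labels and phrases are invented.
report: Item = {
    "source": "TV1 evening bulletin",
    "region": "South East England",
    "today": {"outlook": "showers", "max_temperature": "12C", "wind": "moderate SW"},
    "tomorrow": {"outlook": "sunny intervals", "max_temperature": "14C"},
}

def label_paths(item: Item, prefix: str = "") -> List[str]:
    """Flatten the nested semantic labels, e.g. 'today/outlook'."""
    paths = []
    for label, value in item.items():
        path = prefix + label
        paths.append(path)
        if isinstance(value, dict):
            paths.extend(label_paths(value, path + "/"))
    return paths

print(label_paths(report))
```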
112.
The possibility of developing a simple, inexpensive and specific personal passive “real-time” air sampler incorporating a biosensor for formic acid was investigated. The sensor is based on the enzymatic reaction between formic acid and formate dehydrogenase (FDH) with nicotinamide adenine dinucleotide (NAD+) as a co-factor and Meldola's blue as mediator. An effective way to immobilise the enzyme, co-factor and Meldola's blue on screen-printed, disposable electrodes was found to be in a mixture of glycerol and phosphate buffer covered with a gas-permeable membrane. Steady-state current was reached after 4–15 min and the limit of detection was calculated to be below 1 mg/m3. However, the response decreased by 50% after storage at −15°C for 1 day.
113.
TTCN-3 is an abstract language for the specification of Abstract Test Suites. Coding of TTCN-3 values into physically transmittable messages and decoding of bitstrings into their TTCN-3 representation have been removed from the language itself and delegated to external, specialized components called CoDecs. CoDec development, whether implicit or explicit, is a must in any TTCN-3 testing activity. Field experience has shown that CoDec development and maintenance carry a high cost. To support adequate software engineering practices, a set of types, tools and definitions was developed. This paper identifies gray areas in the TTCN-3 architecture and presents a methodological approach to minimizing the complexity of CoDec development. Even though the initial field of application is IPv6 testing, the main tool introduced, the CoDec Generator, is valuable in any testing application domain. It is designed to lower CoDec costs at every stage of the test-case lifecycle, from development to maintenance. This work has been partly supported by the IST Go4IT European project.
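Stripped to its essence, a generated CoDec must provide the encode/decode pair sketched below: abstract value to transmittable bitstring, and back. The message type and field layout are invented purely for illustration; this is not the paper's CoDec Generator output nor the standard TTCN-3 TRI/TCI interfaces.

```python
import struct
from dataclasses import dataclass

# Hypothetical abstract message type, standing in for a TTCN-3 record value.
@dataclass
class EchoRequest:
    identifier: int   # 16-bit field
    sequence: int     # 16-bit field
    payload: bytes

def encode(value: EchoRequest) -> bytes:
    """Abstract value -> transmittable bitstring (big-endian fixed header)."""
    return struct.pack("!HH", value.identifier, value.sequence) + value.payload

def decode(data: bytes) -> EchoRequest:
    """Transmittable bitstring -> abstract value."""
    identifier, sequence = struct.unpack("!HH", data[:4])
    return EchoRequest(identifier, sequence, data[4:])

msg = EchoRequest(identifier=7, sequence=1, payload=b"ping")
assert decode(encode(msg)) == msg   # round trip preserves the abstract value
```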
114.
Sheep shearers are known to work in sustained flexed postures and have a high prevalence of low back pain (LBP). As sustained posture and spinal movement asymmetry under substantial loads are known risk factors for back injury, our aim was to describe the 3D spinal movement of shearers while working. We hypothesised that thoraco-lumbar and lumbo-sacral movement would be tri-axial, asymmetric, and task specific. Sufficient retro-reflective markers were placed on the trunk of 12 shearers to define thoraco-lumbar and lumbo-sacral 3D motion during three tasks. Thoraco-lumbar movement consistently involved flexion, left lateral flexion, and right rotation. Lumbo-sacral movement consistently involved right lateral flexion in flexion with minimal rotation. Shearers therefore work in sustained spinal flexion in which concurrent, asymmetric spinal movements into both lateral flexion and rotation occur. These asymmetric movements, combined with repetitive loading, may be risk factors underlying the high incidence of LBP in this occupational group.
115.
Most variational active contour models are designed to find local minima of data-dependent energy functionals with the hope that reasonable initial placement of the active contour will drive it toward a "desirable" local minimum as opposed to an undesirable configuration due to noise or complex image structure. As such, there has been much research into the design of complex region-based energy functionals that are less likely to yield undesirable local minima when compared to simpler edge-based energy functionals, whose sensitivity to noise and texture is significantly worse. Unfortunately, most of these more "robust" region-based energy functionals are applicable to a much narrower class of imagery compared to typical edge-based energies due to stronger global assumptions about the underlying image data. Devising new implementation algorithms for active contours that attempt to capture more global minimizers of already proposed image-based energies would allow us to choose an energy that makes sense for a particular class of imagery without concern over its sensitivity to local minima. Such implementations have been proposed for capturing global minima. However, sometimes the completely global minimum is just as undesirable as a minimum that is too local. In this paper, we propose a novel, fast, and flexible dual-front implementation of active contours, motivated by minimal-path techniques and utilizing fast sweeping algorithms, which is easily manipulated to yield minima with variable "degrees" of localness and globalness. By simply adjusting the size of the active regions, the ability to move gracefully from capturing minima that are more local (according to the initial placement of the active contour/surface) to minima that are more global allows this model to obtain "desirable" minimizers (which often are neither the most local nor the most global) more easily. Experiments on various 2D and 3D images and comparisons with several active contour models and region-growing methods illustrate the properties of this model and its performance in a variety of segmentation applications.
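The abstract gives no implementation detail beyond the band-shaped active region and the two meeting fronts, so the following is only a structural sketch of one update step: the active region is a band around the current contour, and band pixels join whichever side "reaches" them first. A speed-scaled Euclidean distance is used as a crude stand-in for the weighted minimal-action (arrival-time) maps that the paper computes with fast sweeping; a faithful version would solve the eikonal equation |∇T| = 1/speed instead. All names and the toy example are illustrative only.

```python
import numpy as np
from scipy import ndimage

def dual_front_step(inside, speed, band_radius):
    """Structural sketch of one dual-front-style update.

    inside      : boolean array, True for pixels currently inside the contour.
    speed       : positive array of front speeds derived from the image
                  (e.g. large in homogeneous regions, small near edges).
    band_radius : half-width of the active region; a wider band lets the
                  contour move further per step (more "global" behaviour).
    """
    # Active region: a band straddling the current contour.
    dist_to_contour = np.minimum(ndimage.distance_transform_edt(inside),
                                 ndimage.distance_transform_edt(~inside))
    band = dist_to_contour <= band_radius

    # Crude stand-in for the two weighted arrival-time maps: distance from the
    # non-band interior / exterior, scaled by the local speed.
    t_in = ndimage.distance_transform_edt(~(inside & ~band)) / (speed + 1e-9)
    t_out = ndimage.distance_transform_edt(~(~inside & ~band)) / (speed + 1e-9)

    # Band pixels join whichever front arrives first; other pixels keep their label.
    return np.where(band, t_in <= t_out, inside)

# Toy example: with a uniform speed image, a small disc stays roughly in place.
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
print(disc.sum(), dual_front_step(disc, np.ones((64, 64)), band_radius=5).sum())
```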
116.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, camera distortion effects, if not accounted for, will lead to inaccurate 3D reconstructions and geometrical measurements. A second problem is color calibration: variations in camera responses result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates the relative poses and orientations of the multiple cameras in the system, and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered a coordinated refinement of the camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise-smooth surface assumption, and evolve the pose, orientation, and scale parameters of this 3D model object without requiring 2D feature extraction from the camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.
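The abstract does not state the paper's distortion model, but the distortion coefficients it refers to are typically the k-terms of the standard polynomial radial model. The sketch below simply shows what such coefficients do to normalized image points; the coefficient values and point grid are chosen arbitrarily for illustration.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2):
    """Map ideal (undistorted) normalized image points to distorted ones
    using the common two-coefficient polynomial radial model
        x_d = x * (1 + k1*r^2 + k2*r^4),  with r^2 = x^2 + y^2.

    points : (N, 2) array of normalized coordinates (camera frame, z = 1).
    """
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Example: a strongly distorting low-cost lens bends points far from the centre.
grid = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.6], [0.5, 0.5]])
print(apply_radial_distortion(grid, k1=-0.25, k2=0.05))
```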
117.
As biometric authentication systems become more prevalent, it is becoming increasingly important to evaluate their performance. This paper introduces a novel statistical method of performance evaluation for these systems. Given a database of authentication results from an existing system, the method uses a hierarchical random effects model, along with Bayesian inference techniques yielding posterior predictive distributions, to predict performance in terms of error rates using various explanatory variables. By incorporating explanatory variables as well as random effects, the method allows for prediction of error rates when the authentication system is applied to potentially larger and/or different groups of subjects than those originally documented in the database. We also extend the model to allow for prediction of the probability of a false alarm on a "watch-list" as a function of the list size. We consider application of our methodology to three different face authentication systems: a filter-based system, a Gaussian mixture model (GMM)-based system, and a system based on a frequency-domain representation of facial asymmetry.
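The abstract does not give the watch-list formula, but under the common simplifying assumption of independent comparisons sharing one per-comparison false-accept rate p, the chance of at least one false alarm against a list of n subjects is 1 − (1 − p)^n. The sketch below averages that quantity over draws of p from a stand-in Beta posterior (the paper instead uses a hierarchical random-effects model with explanatory variables); the counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior for the per-comparison false-accept rate p: a Beta
# distribution updated with hypothetical counts.
false_accepts, genuine_rejections = 12, 9988        # invented counts
p_draws = rng.beta(1 + false_accepts, 1 + genuine_rejections, size=5000)

def watchlist_false_alarm(list_size, p_draws):
    """Posterior predictive P(at least one false alarm) against a watch-list,
    assuming independent comparisons with the same per-comparison rate."""
    return np.mean(1.0 - (1.0 - p_draws) ** list_size)

for n in (1, 10, 100, 1000):
    print(n, round(watchlist_false_alarm(n, p_draws), 4))
```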
118.
PURPOSE: To develop and validate a PET sorting algorithm based on respiratory amplitude to correct for abnormal respiratory cycles. METHOD AND MATERIALS: Using the 4D NCAT phantom model, 3D PET images were simulated in the lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude-binning algorithm, the NCAT phantom was used to simulate one case with five different respiratory periods and another case with five respiratory periods along with five respiratory amplitudes. Gated and un-gated images were compared, and the new amplitude-binning algorithm was compared with the time-binning algorithm by calculating the mean number of counts in the ROI (region of interest). RESULTS: An average improvement of 8.87 ± 5.10% was reported for a total of 16 tumors with different tumor sizes and different tumor-to-background (T/B) ratios using the new sorting algorithm. As both the T/B ratio and the tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller tumors and lower T/B ratios indicates a potential improvement in detecting the more problematic tumors.
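The abstract does not spell out the binning rule, but the essential difference from time (phase) gating is that samples are grouped by where the respiratory signal sits between its extremes rather than by elapsed time within each cycle. A minimal sketch under that reading, with an invented breathing trace:

```python
import numpy as np

def amplitude_bin_indices(respiratory_amplitude, n_bins):
    """Assign each acquisition time point to an amplitude bin.

    Unlike phase (time) gating, which splits every cycle into equal time
    segments, amplitude gating bins samples by where the respiratory trace
    lies between its minimum and maximum, so irregular cycle lengths do not
    smear the bins.
    """
    a = np.asarray(respiratory_amplitude, dtype=float)
    edges = np.linspace(a.min(), a.max(), n_bins + 1)
    # np.digitize against the inner edges yields indices 0..n_bins-1.
    return np.clip(np.digitize(a, edges[1:-1]), 0, n_bins - 1)

# Example: an irregular breathing trace with varying period and depth.
t = np.linspace(0, 60, 2000)
trace = np.sin(2 * np.pi * t / (4 + np.sin(t / 7))) * (1 + 0.3 * np.sin(t / 11))
bins = amplitude_bin_indices(trace, n_bins=5)
print(np.bincount(bins))            # how many samples land in each bin
```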
119.
This paper describes novel transcoding techniques aimed at low-complexity MPEG-2 to H.264/AVC transcoding. An important application for this type of conversion is efficient storage of broadcast video in consumer devices. The architecture for such a system is presented, which includes novel motion mapping and mode decision algorithms. For the motion mapping, two algorithms are presented. Both efficiently map incoming MPEG-2 motion vectors to outgoing H.264/AVC motion vectors regardless of the block sizes that the motion vectors correspond to. In addition, the algorithm maps motion vectors to different reference pictures, which is useful for picture type conversion and prediction from multiple reference pictures. We also propose an efficient rate-distortion optimised macroblock coding mode decision algorithm, which first evaluates candidate modes using a simple cost function to form a reduced set of candidates, and then evaluates the more complex Lagrangian cost on this reduced set to determine the coding mode. Extensive simulation results show that our proposed transcoder incorporating these algorithms achieves very good rate-distortion performance with low complexity. Compared with the cascaded decoder-encoder solution, coding efficiency is maintained while complexity is significantly reduced.
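As a rough illustration of the two-stage mode decision described above, the sketch below prunes candidate macroblock modes with a cheap cost and evaluates the Lagrangian cost J = D + λR only on the shortlist. The modes, costs, and λ heuristic (the common 0.85·2^((QP−12)/3) rule from the H.264 reference model) are illustrative stand-ins, not the paper's exact algorithm.

```python
def decide_mode(candidates, simple_cost, full_cost, keep=3):
    """candidates  : list of mode identifiers
    simple_cost : cheap estimate per mode, e.g. SAD of the prediction residual
    full_cost   : callable returning (distortion, rate) for a mode
    """
    lam = 0.85 * 2 ** ((28 - 12) / 3.0)   # common H.264 lambda heuristic at QP = 28
    shortlist = sorted(candidates, key=simple_cost)[:keep]

    def lagrangian(mode):
        distortion, rate = full_cost(mode)
        return distortion + lam * rate    # J = D + lambda * R

    return min(shortlist, key=lagrangian)

# Toy example with made-up SAD and (distortion, rate) numbers; a real transcoder
# would measure these by actually coding the macroblock in each mode.
sad = {"SKIP": 900, "16x16": 400, "16x8": 390, "8x8": 380, "INTRA": 950}
dr = {"SKIP": (1500, 1), "16x16": (700, 40), "16x8": (650, 70),
      "8x8": (600, 120), "INTRA": (500, 300)}
print(decide_mode(list(sad), sad.get, dr.get, keep=3))
```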
120.
Cancer chemoprevention approaches use either pharmacological or dietary agents to impede, arrest or reverse the carcinogenic process. Although several agents have shown effectiveness against colon cancer, present intervention strategies provide only partial reduction. In this study, we utilized high-resolution endoscopy to obtain colon tumor biopsy specimens from Apc mutant mice before and after a 2-week sulindac intervention. To acquire information beyond genomics, proteome analysis using the ProteomeLab PF2D platform was implemented to generate 2-D protein expression maps from the biopsies. The chromatograms produced signature profiles common to sulindac-treated and non-sulindac-treated samples, as well as contrasting profiles termed "fingerprints". We selected a double peak that appeared in tumor biopsies from sulindac-treated mice. Further analysis by MS sequencing identified this protein as histone H2B. The location of H2B in the first dimension strongly suggested a post-translational modification, consistent with the identification of two oxidized methionines. While further studies on sulindac proteomic fingerprints are underway, this study demonstrates the feasibility and advantages of "real-time" proteomic analysis for obtaining information on biomarker discovery and drug activity that would not be revealed by a genetic assay. This approach should be broadly applicable for assessing lesion responsiveness in a wide range of translational and human clinical studies.