101.
Structured text is a general concept that is implicit in a variety of approaches to handling information. Syntactically, an item of structured text is a number of grammatically simple phrases together with a semantic label for each phrase. Items of structured text may be nested within larger items of structured text. The semantic labels in a structured text are meant to parameterize a stereotypical situation, and so a particular item of structured text is an instance of that stereotypical situation. Much information is potentially available as structured text, including tagged text in XML, text in relational and object-oriented databases, and the output from information extraction systems in the form of instantiated templates. In this paper, we formalize the concept of structured text and then focus on how we can identify inconsistency in the logical representation of items of structured text. We then present a new framework for merging logical theories that can be employed to merge inconsistent items of structured text. To illustrate, we consider the problem of merging reports such as weather reports.
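As a rough illustration of the idea (the labels, values, and the naive surface-level check below are assumptions made for this sketch, not the authors' logical formalism), an item of structured text for a weather report can be pictured as nested label/phrase pairs:

```python
# Minimal sketch: an item of structured text as semantic labels mapped to
# grammatically simple phrases, with nesting allowed. Labels and values are
# invented for illustration.

report_a = {
    "source": "station 1",
    "date": "2004-05-01",
    "conditions": {            # nested item of structured text
        "sky": "overcast",
        "precipitation": "light rain",
    },
    "temperature": "12 C",
}

report_b = {
    "source": "station 2",
    "date": "2004-05-01",
    "conditions": {
        "sky": "clear",
        "precipitation": "none",
    },
    "temperature": "12 C",
}


def conflicting_labels(x, y, path=()):
    """Return label paths where two items assign different phrases.

    This is only a surface-level disagreement check; the paper identifies
    inconsistency on a logical representation of the items.
    """
    conflicts = []
    for label in x.keys() & y.keys():
        vx, vy = x[label], y[label]
        if isinstance(vx, dict) and isinstance(vy, dict):
            conflicts += conflicting_labels(vx, vy, path + (label,))
        elif vx != vy:
            conflicts.append(path + (label,))
    return conflicts


print(conflicting_labels(report_a, report_b))
# e.g. [('source',), ('conditions', 'sky'), ('conditions', 'precipitation')]
```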
102.
The possibility of developing a simple, inexpensive and specific personal passive “real-time” air sampler incorporating a biosensor for formic acid was investigated. The sensor is based on the enzymatic reaction between formic acid and formate dehydrogenase (FDH) with nicotinamide adenine dinucleotide (NAD+) as a co-factor and Meldola's blue as mediator. An effective way to immobilise the enzyme, co-factor and Meldola's blue on screen-printed, disposable electrodes was found to be in a mixture of glycerol and phosphate buffer covered with a gas-permeable membrane. Steady-state current was reached after 4–15 min and the limit of detection was calculated to be below 1 mg/m³. However, the response decreased by 50% after storage at −15°C for 1 day.
103.
TTCN-3 is an abstract language for the specification of Abstract Test Suites. Coding of TTCN-3 values into physically transmittable messages and decoding of bitstrings into their TTCN-3 representation have been removed from the language itself and delegated to external, specialized components called CoDecs. CoDec development, either implicit or explicit, is a must in any TTCN-3 testing activity. Field experience showed that there is a high cost associated with CoDec development and maintenance. To achieve adequate software engineering practices, a set of types, tools and definitions was developed. This paper unveils gray areas in the TTCN-3 architecture and presents a methodological approach to minimize the complexity of CoDec development. Even though the initial field of application is IPv6 testing, the main tool introduced, the CoDec Generator, is valuable in any testing application domain. It is designed to lower CoDec costs across all test case lifecycle stages, from development to maintenance. This work has been partly supported by the IST Go4IT European project.
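For readers unfamiliar with the CoDec role, the following minimal sketch shows the kind of job a CoDec performs for a single invented message type: serialising an abstract value into a transmittable bitstring and rebuilding the value on the way back. The field layout and function names are assumptions for illustration only; they are not the TTCN-3 codec interface or the CoDec Generator's output.

```python
import struct

def encode_ping(seq: int, payload: bytes) -> bytes:
    """Serialise an abstract {seq, payload} value into wire format
    (2-byte sequence number, 1-byte length, then the payload)."""
    return struct.pack("!HB", seq, len(payload)) + payload

def decode_ping(wire: bytes) -> dict:
    """Rebuild the abstract value from the received bitstring."""
    seq, length = struct.unpack("!HB", wire[:3])
    return {"seq": seq, "payload": wire[3:3 + length]}

msg = encode_ping(7, b"abc")
print(decode_ping(msg))  # {'seq': 7, 'payload': b'abc'}
```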
104.
The fracture resistance of structures is optimized using the level-set method. Fracture resistance is assumed to be related to the elastic energy released by a crack propagating in a normal direction from parts of the boundary that are in tension, and is calculated using the virtual crack extension technique. The shape derivative of the fracture-resistance objective function is derived. Two illustrative two-dimensional case studies are presented: a hole in a plate subjected to biaxial strain; and a bridge fixed at both ends subjected to a single load in which the compliance and fracture resistance are jointly optimized. The structures obtained have rounded corners and more material at places where they are in tension. Based on the results, we propose that fracture resistance may be modeled more easily but less directly by including a term proportional to surface area in the objective function, in conjunction with nonlinear elasticity where the Young’s modulus in tension is lower than in compression.
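In symbols, the simpler surrogate suggested at the end of the abstract could take a form like the following, where the compliance term and the weight γ are assumptions chosen for illustration rather than the authors' exact formulation, and |∂Ω| denotes the boundary surface area (perimeter in 2D):

J(\Omega) \;=\; \underbrace{\int_{\Gamma_N} g \cdot u \,\mathrm{d}s}_{\text{compliance}} \;+\; \gamma\, |\partial\Omega|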
105.
Sheep shearers are known to work in sustained flexed postures and have a high prevalence of low back pain (LBP). As sustained posture and spinal movement asymmetry under substantial loads are known risk factors for back injury, our aim was to describe the 3D spinal movement of shearers while working. We hypothesised that thoraco-lumbar and lumbo-sacral movement would be tri-axial, asymmetric, and task specific. Sufficient retro-reflective markers were placed on the trunk of 12 shearers to define thoraco-lumbar and lumbo-sacral 3D motion during three tasks. Thoraco-lumbar movement consistently involved flexion, left lateral flexion, and right rotation. Lumbo-sacral movement consistently involved right lateral flexion in flexion with minimal rotation. Shearers therefore work in sustained spinal flexion during which concurrent, asymmetric spinal movements into both lateral flexion and rotation occur. These asymmetric movements, combined with repetitive loading, may be risk factors leading to the high incidence of LBP in this occupational group.
106.
Most variational active contour models are designed to find local minima of data-dependent energy functionals with the hope that reasonable initial placement of the active contour will drive it toward a "desirable" local minimum as opposed to an undesirable configuration due to noise or complex image structure. As such, there has been much research into the design of complex region-based energy functionals that are less likely to yield undesirable local minima when compared to simpler edge-based energy functionals whose sensitivity to noise and texture is significantly worse. Unfortunately, most of these more "robust" region-based energy functionals are applicable to a much narrower class of imagery compared to typical edge-based energies due to stronger global assumptions about the underlying image data. Devising new implementation algorithms for active contours that attempt to capture more global minimizers of already proposed image-based energies would allow us to choose an energy that makes sense for a particular class of imagery without concern over its sensitivity to local minima. Such implementations have been proposed for capturing global minima. However, sometimes the completely global minimum is just as undesirable as a minimum that is too local. In this paper, we propose a novel, fast, and flexible dual front implementation of active contours, motivated by minimal path techniques and utilizing fast sweeping algorithms, which is easily manipulated to yield minima with variable "degrees" of localness and globalness. By simply adjusting the size of active regions, the ability to gracefully move from capturing minima that are more local (according to the initial placement of the active contour/surface) to minima that are more global allows this model to more easily obtain "desirable" minimizers (which often are neither the most local nor the most global). Experiments on various 2D and 3D images and comparisons with some active contour models and region-growing methods are also given to illustrate the properties of this model and its performance in a variety of segmentation applications.
107.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is the color calibration problem caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates relative poses and orientations of multiple cameras in the system and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge on the calibration object, making a piecewise smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring a 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.
108.
As biometric authentication systems become more prevalent, it is becoming increasingly important to evaluate their performance. This paper introduces a novel statistical method of performance evaluation for these systems. Given a database of authentication results from an existing system, the method uses a hierarchical random effects model, along with Bayesian inference techniques yielding posterior predictive distributions, to predict performance in terms of error rates using various explanatory variables. By incorporating explanatory variables as well as random effects, the method allows for prediction of error rates when the authentication system is applied to potentially larger and/or different groups of subjects than those originally documented in the database. We also extend the model to allow for prediction of the probability of a false alarm on a "watch-list" as a function of the list size. We consider application of our methodology to three different face authentication systems: a filter-based system, a Gaussian mixture model (GMM)-based system, and a system based on frequency domain representation of facial asymmetry.
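One standard shape such a hierarchical random-effects model can take is sketched below; the Bernoulli/logit link, the subject-level random effect, and the notation are assumptions chosen for illustration, not the authors' exact specification:

y_{ij}\mid p_{ij} \sim \mathrm{Bernoulli}(p_{ij}),\qquad
\operatorname{logit}(p_{ij}) = x_{ij}^{\top}\beta + u_i,\qquad
u_i \sim \mathcal{N}(0,\sigma_u^{2}),

where y_{ij} indicates an authentication error for attempt j by subject i and x_{ij} collects the explanatory variables. Error rates for new or larger groups of subjects are then predicted from the posterior predictive distribution

p(\tilde{y}\mid y) = \int p(\tilde{y}\mid \beta,\sigma_u)\, p(\beta,\sigma_u \mid y)\, \mathrm{d}\beta\, \mathrm{d}\sigma_u .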
109.
PURPOSE: To develop and validate a PET sorting algorithm based on respiratory amplitude to correct for abnormal respiratory cycles. METHOD AND MATERIALS: Using the 4D NCAT phantom model, 3D PET images were simulated in lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case with five different respiratory periods and another case with five respiratory periods along with five respiratory amplitudes. Comparisons were performed between gated and un-gated images and between the new amplitude binning algorithm and the time binning algorithm by calculating the mean number of counts in the ROI (region of interest). RESULTS: An average improvement of 8.87±5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor-to-background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller-diameter tumors and lower T/B ratios indicates a potential improvement in detecting more problematic tumors.
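A minimal sketch of the amplitude-binning idea is given below; the bin count, the equal-width bins, and the frame-level respiratory trace are assumptions for illustration, not the validated algorithm itself:

```python
import numpy as np

def amplitude_bins(resp_amplitude, n_bins=5):
    """Assign each PET frame to a gate by respiratory amplitude.

    Unlike time (phase) binning, frames from irregular breathing cycles
    that reach the same displacement fall into the same gate.
    """
    a = np.asarray(resp_amplitude, dtype=float)
    edges = np.linspace(a.min(), a.max(), n_bins + 1)
    # interior edges only, so gate indices run from 0 to n_bins - 1
    return np.digitize(a, edges[1:-1])

# Example: an irregular breathing trace sampled once per frame
t = np.linspace(0, 20, 200)
trace = np.sin(2 * np.pi * t / 4.0) * (1.0 + 0.3 * np.sin(2 * np.pi * t / 13.0))
gates = amplitude_bins(trace, n_bins=5)
print(np.bincount(gates))  # number of frames collected into each gate
```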
110.
This paper describes novel transcoding techniques aimed at low-complexity MPEG-2 to H.264/AVC transcoding. An important application for this type of conversion is efficient storage of broadcast video in consumer devices. The architecture for such a system is presented, which includes novel motion mapping and mode decision algorithms. For the motion mapping, two algorithms are presented. Both efficiently map incoming MPEG-2 motion vectors to outgoing H.264/AVC motion vectors regardless of the block sizes that the motion vectors correspond to. In addition, the algorithm maps motion vectors to different reference pictures, which is useful for picture type conversion and prediction from multiple reference pictures. We also propose an efficient rate-distortion optimised macroblock coding mode decision algorithm, which first evaluates candidate modes based on a simple cost function so that a reduced set of candidate modes is formed; then, based on this reduced set, we evaluate the more complex Lagrangian cost calculation to determine the coding mode. Extensive simulation results show that our proposed transcoder incorporating the proposed algorithms achieves very good rate-distortion performance with low complexity. Compared with the cascaded decoder-encoder solution, the coding efficiency is maintained while the complexity is significantly reduced.
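The two-stage mode decision can be sketched as follows; the candidate mode names, the toy costs, and the lambda value are invented for illustration, and the two cost callables stand in for the paper's simple cost function and the Lagrangian cost J = D + λR:

```python
def two_stage_mode_decision(candidates, simple_cost, rd_cost, keep=3):
    """Stage 1: rank all modes by a cheap cost and keep the best `keep`.
    Stage 2: evaluate the full Lagrangian cost only on the reduced set
    and return the minimiser."""
    reduced = sorted(candidates, key=simple_cost)[:keep]
    return min(reduced, key=rd_cost)


# Toy usage with made-up per-mode costs
simple = {"SKIP": 10.0, "INTER16x16": 12.0, "INTER8x8": 30.0, "INTRA4x4": 45.0}
dist   = {"SKIP": 9.0,  "INTER16x16": 6.0,  "INTER8x8": 5.0,  "INTRA4x4": 4.5}
rate   = {"SKIP": 1.0,  "INTER16x16": 8.0,  "INTER8x8": 20.0, "INTRA4x4": 35.0}
LAMBDA = 0.4

best = two_stage_mode_decision(
    list(simple),
    simple_cost=lambda m: simple[m],
    rd_cost=lambda m: dist[m] + LAMBDA * rate[m],
)
print(best)  # "INTER16x16" for these toy numbers
```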