131.
In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone‐shaped function which leaves elliptic footprints in the opacity buffer during slice‐based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front‐to‐back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high‐quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre‐computation.
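A minimal NumPy sketch of the loop described above, written under assumptions that go beyond the abstract: axis-aligned slices, a greyscale emission model, and a shifted box blur standing in for the tilted cone-shaped footprint. Function and parameter names are invented for illustration.

```python
import numpy as np

def composite_with_directional_occlusion(volume, alpha_tf, light_shift=(1, 0), blur=1):
    """Front-to-back slice compositing with an incrementally blurred opacity buffer.

    volume      : (num_slices, H, W) scalar field, slice 0 closest to the viewer
    alpha_tf    : callable mapping scalar values to opacities in [0, 1]
    light_shift : per-slice (dy, dx) offset of the blur footprint; this stands in for
                  the tilted cone that encodes the light direction (illustrative)
    blur        : half-width of the box filter applied to the occlusion buffer
    """
    num_slices, h, w = volume.shape
    occlusion = np.zeros((h, w))   # accumulated, blurred opacity as seen from the light
    transmit = np.ones((h, w))     # remaining transmittance towards the viewer
    image = np.zeros((h, w))       # composited brightness (greyscale for brevity)

    for s in range(num_slices):    # front-to-back order
        alpha = np.clip(alpha_tf(volume[s]), 0.0, 1.0)
        lit = 1.0 - occlusion      # degree of occlusion defined by the slices in front
        image += transmit * alpha * lit * volume[s]
        transmit *= 1.0 - alpha

        # incremental blur: add this slice's opacity, shift towards the light,
        # and average a small neighbourhood before the next slice reads the buffer
        shifted = np.roll(occlusion + alpha, light_shift, axis=(0, 1))
        k = 2 * blur + 1
        padded = np.pad(shifted, blur, mode="edge")
        blurred = sum(padded[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k))
        occlusion = np.clip(blurred / k**2, 0.0, 1.0)
    return image

# toy data, just to exercise the function
vol = np.random.rand(32, 64, 64)
img = composite_with_directional_occlusion(vol, lambda v: 0.05 * v)
```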
132.
Consider a rooted tree T of arbitrary maximum degree d representing a collection of n web pages connected via a set of links, all reachable from a source home page represented by the root of T. Each web page i carries a probability p_i representative of the frequency with which it is visited. By adding hotlinks—shortcuts from a node to one of its descendants—we wish to minimize the expected number of steps l needed to visit pages from the home page, expressed as a function of the entropy H(p) of the access probabilities p. This paper introduces several new strategies for effectively assigning hotlinks in a tree. For assigning exactly one hotlink per node, our method guarantees an upper bound on l of 1.141H(p)+1 if d>2 and 1.08H(p)+2/3 if d=2. We also present the first efficient general methods for assigning at most k hotlinks per node in trees of arbitrary maximum degree, achieving bounds on l of at most 2H(p)/log(k+1)+1 and H(p)/(log(k+d)−log d)+1, respectively. All our methods are strong, i.e., they provide the same guarantees on all subtrees after the assignment. We also present an algorithm implementing these methods in O(n log n) time, an improvement over the previous O(n²) time algorithms. Finally, we prove an Ω(n log n) lower bound on the running time of any strong method that guarantees an average access time strictly better than 2H(p).
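To make the quantities in these bounds concrete, here is a small Python sketch that computes the entropy H(p) of the access distribution and the expected number of steps to reach pages before and after hotlinks are added; the example tree and the particular hotlink chosen are made up, and nothing here implements the paper's assignment strategies.

```python
import math
from collections import deque

def entropy(probs):
    """Shannon entropy H(p) of the page-access distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_steps(children, probs, hotlinks=None, root=0):
    """Expected number of link traversals from the home page, weighted by access probability.

    children : dict mapping node -> list of child nodes (the tree T)
    probs    : dict mapping node -> access probability p_i
    hotlinks : optional dict mapping node -> descendant reached by its hotlink
    """
    hotlinks = hotlinks or {}
    depth = {root: 0}
    queue = deque([root])
    while queue:                                    # BFS: every link (tree edge or hotlink) costs one step
        u = queue.popleft()
        nexts = list(children.get(u, []))
        if u in hotlinks:
            nexts.append(hotlinks[u])
        for v in nexts:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return sum(p * depth[n] for n, p in probs.items())

# Path of four pages 0-1-2-3, with most traffic on the deepest page (made-up data).
children = {0: [1], 1: [2], 2: [3]}
probs = {0: 0.1, 1: 0.1, 2: 0.2, 3: 0.6}
print(entropy(probs.values()))                  # H(p)
print(expected_steps(children, probs))          # l without hotlinks
print(expected_steps(children, probs, {0: 3}))  # l with a hotlink from the home page to page 3
```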
133.
We argue that expert finding is sensitive to multiple document features in an organizational intranet. These features include associations between experts and a query topic at the sentence, paragraph, and document levels; document authority information such as the PageRank, indegree, and URL length of documents; and internal document structures that indicate the experts’ relationship with the content of documents. Our assumption is that expert finding can benefit substantially from incorporating these document features. However, existing language modeling approaches for expert finding have not sufficiently taken them into account. We propose a novel language modeling approach for expert finding that integrates multiple document features. Our experiments on two large-scale TREC Enterprise Track datasets, i.e., the W3C and CSIRO datasets, demonstrate that the nature of the two organizational intranets and the two types of expert finding tasks, i.e., key contact finding for CSIRO and knowledgeable person finding for W3C, influence the effectiveness of different document features. Our work provides insights into which document features work for certain types of expert finding tasks, and helps design expert finding strategies that are effective for different scenarios. Our main contribution is an effective formal method for modeling multiple document features in expert finding, together with a systematic investigation of their effects. It is worth noting that our approach achieves better results in terms of MAP than previous language-model-based approaches and the best automatic runs in both the TREC 2006 and TREC 2007 expert search tasks.
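The abstract does not spell out the model, so the sketch below is only a hedged illustration of how multi-level associations and a document-authority prior could be mixed in a language-modeling-style score; every field name, weight, and smoothing choice here is an assumption.

```python
import math
import re

def expert_score(candidate, query_terms, documents, level_weights=(0.5, 0.3, 0.2)):
    """Toy expert-finding score mixing sentence-, paragraph-, and document-level evidence."""
    score = 0.0
    for doc in documents:
        if candidate not in doc["mentions"]:
            continue
        levels = (
            [s for s in doc["sentences"] if candidate in s],   # sentence-level co-occurrence
            [p for p in doc["paragraphs"] if candidate in p],  # paragraph-level co-occurrence
            [doc["text"]],                                     # whole-document association
        )
        assoc = 0.0
        for weight, units in zip(level_weights, levels):
            tokens = re.findall(r"[a-z0-9]+", " ".join(units).lower())
            if not tokens:
                continue
            # unigram query-likelihood with add-one smoothing
            log_p = sum(math.log((tokens.count(t) + 1) / (len(tokens) + 1)) for t in query_terms)
            assoc += weight * math.exp(log_p)
        # authority information (here a PageRank value) acts as a document prior
        score += doc.get("pagerank", 1.0) * assoc
    return score

# made-up document to exercise the function
docs = [{"sentences": ["Ann Smith maintains the build system."],
         "paragraphs": ["Ann Smith maintains the build system. Contact her for CI issues."],
         "text": "Ann Smith maintains the build system. Contact her for CI issues.",
         "mentions": {"Ann Smith"}, "pagerank": 0.8}]
print(expert_score("Ann Smith", ["build", "system"], docs))
```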
134.
135.
Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task, incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models is an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for systematically deriving a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system is obtained. The approach is applied to this data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity.
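A minimal sketch of the kind of network the four-step derivation aims at, assessed by exact enumeration; the two fact nodes, the activity node, and all probability tables are invented for illustration and are not the structures derived in the paper.

```python
from itertools import product

# Binary nodes (True = favourable). The conditional probability tables are made up
# purely to illustrate the assessment step, not taken from the paper.
P_LOW_CLONING = 0.7                      # P(low code cloning)
P_HIGH_COVERAGE = 0.6                    # P(high test coverage)
P_CHEAP_MODIFICATION = {                 # P(modification activity is cheap | cloning, coverage)
    (True, True): 0.9,
    (True, False): 0.6,
    (False, True): 0.5,
    (False, False): 0.2,
}

def joint(cloning, coverage, cheap):
    """Joint probability of one full assignment of the three nodes."""
    p = P_LOW_CLONING if cloning else 1 - P_LOW_CLONING
    p *= P_HIGH_COVERAGE if coverage else 1 - P_HIGH_COVERAGE
    p_cheap = P_CHEAP_MODIFICATION[(cloning, coverage)]
    return p * (p_cheap if cheap else 1 - p_cheap)

def posterior_cheap_given_coverage(coverage):
    """P(cheap modification | observed test coverage), by enumeration over the hidden node."""
    num = sum(joint(c, coverage, True) for c in (True, False))
    den = sum(joint(c, coverage, e) for c, e in product((True, False), repeat=2))
    return num / den

print(posterior_cheap_given_coverage(True))   # assessment with favourable evidence
print(posterior_cheap_given_coverage(False))  # ... and with unfavourable evidence
```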
136.
Acceptance testing is a time-consuming task for complex software systems that have to fulfill a large number of requirements. To reduce this effort, we have developed a largely automated method for deriving test plans from requirements expressed in natural language. It consists of three stages: annotation, clustering, and test plan specification. The general idea is to exploit redundancies and implicit relationships in requirements specifications. Multi-viewpoint techniques based on RM-ODP (Reference Model for Open Distributed Processing) are employed for specifying the requirements. We then use linguistic analysis techniques, requirements clustering algorithms, and pattern-based requirements collection to reduce the total effort of testing against the requirements specification. In particular, we use linguistic analysis to extract and annotate the actor, process, and object of a requirements statement. During clustering, a similarity function is computed as a measure of the overlap between requirements. In the test plan specification stage, our approach provides capabilities for semi-automatically deriving test plans and acceptance criteria from the clustered informal textual requirements. Two patterns are applied to compute a suitable order of test activities. The generated test plans consist of a sequence of test steps and assertions that are executed or checked in the given order. We also present the supporting prototype tool TORC, which is available open source. For the evaluation of the approach, we conducted a case study in the field of acceptance testing of a national electronic identification system. In summary, we report lessons learned on how linguistic analysis and clustering techniques can help testers understand the relations between requirements and improve test planning.
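As a hedged illustration of the clustering stage, the sketch below measures requirement overlap with a token-level Jaccard similarity and groups statements greedily above a threshold; the paper's actual similarity function and clustering algorithm may differ, so treat every choice here (including the example requirements) as an assumption.

```python
def jaccard(a, b):
    """Overlap of two requirement statements, measured on their word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cluster_requirements(requirements, threshold=0.4):
    """Greedy single-pass clustering: a requirement joins the first cluster it resembles."""
    clusters = []
    for req in requirements:
        for cluster in clusters:
            if any(jaccard(req, member) >= threshold for member in cluster):
                cluster.append(req)
                break
        else:
            clusters.append([req])
    return clusters

reqs = [
    "The operator shall register a new citizen record",
    "The operator shall update an existing citizen record",
    "The system shall log every failed login attempt",
]
for group in cluster_requirements(reqs):
    print(group)   # each group becomes one candidate section of the test plan
```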
137.
A Database and Evaluation Methodology for Optical Flow
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web. Subsequently, a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.
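The two error measures named in the abstract have standard definitions, sketched below in NumPy: the angular error follows the usual convention of comparing the 3-D vectors (u, v, 1), and the endpoint error is the Euclidean distance between estimated and ground-truth flow vectors. The random arrays at the end exist only to exercise the functions.

```python
import numpy as np

def average_angular_error(flow, gt):
    """Barron-style angular error: angle between (u, v, 1) and (u_gt, v_gt, 1), averaged, in degrees."""
    u, v = flow[..., 0], flow[..., 1]
    ug, vg = gt[..., 0], gt[..., 1]
    num = u * ug + v * vg + 1.0
    den = np.sqrt(u**2 + v**2 + 1.0) * np.sqrt(ug**2 + vg**2 + 1.0)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()

def average_endpoint_error(flow, gt):
    """Absolute flow endpoint error: Euclidean distance between estimated and true vectors, averaged."""
    return np.sqrt(((flow - gt) ** 2).sum(axis=-1)).mean()

# flow fields of shape (H, W, 2); random data just to show the call pattern
est = np.random.randn(4, 4, 2)
ref = np.random.randn(4, 4, 2)
print(average_angular_error(est, ref), average_endpoint_error(est, ref))
```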
138.
The Center for Robot-Assisted Search and Rescue (CRASAR®) deployed a customized AEOS man-portable unmanned surface vehicle and two commercially available underwater vehicles (the autonomous YSI EcoMapper and the tethered VideoRay) to inspect the Rollover Pass bridge on the Bolivar Peninsula of Texas in the aftermath of Hurricane Ike. A preliminary domain analysis with the vehicles identified key tasks in subsurface bridge inspection (mapping the debris field and inspecting the bridge footings for scour), control challenges (navigation under loss of GPS, underwater obstacle avoidance, and stable positioning in high currents without GPS), and possible improvements to human-robot interaction (additional display units so that mission specialists can view and operate on imagery independently of the operator control unit, two-way audio so that the operator and field personnel can communicate while launching or recovering the vehicle, and increased state sensing for reliability), and discussed the cooperative use of surface, underwater, and aerial vehicles. The article posits seven milestones in the development of a fully functional UMV for bridge inspection: standardize mission payloads, add health monitoring, improve teleoperation through better human-robot interaction, add 3D obstacle avoidance, improve station-keeping, handle large data sets, and support cooperative sensing.
139.
Absorption-based opto-chemical sensors for oxygen are presented that consist of leuco dyes (leuco indigo and leuco thioindigo) incorporated into two kinds of polymer matrices. An irreversible and visible color change (to red or blue) is caused by a chromogenic chemistry involving the oxidation of the (virtually colorless) leuco dyes by molecular oxygen. The moderately gas permeable copolymer poly(styrene-co-acrylonitrile) and a highly oxygen-permeable polyurethane hydrogel, respectively, are used in order to increase the effective dynamic range for visualizing and detecting oxygen. We describe the preparation and properties of four different types of such oxygen sensors that are obtained by dip-coating a gas impermeable foil made from poly(ethylene terephthalate) with a sensor layer composed of leuco dye and polymer.
140.
On the purpose of Event-B proof obligations
Event-B is a formal modelling method which is claimed to be suitable for diverse modelling domains, such as reactive systems and sequential program development. This claim hinges on the fact that any particular model has an appropriate semantics. In Event-B, this semantics is provided implicitly by the proof obligations associated with a model; there is no fixed semantics, though. In this article we argue that this approach is beneficial to modelling because we can use similar proof obligations across a variety of modelling domains. By way of two examples we show how similar proof obligations are linked to different semantics. A small set of proof obligations is thus suitable for a whole range of modelling problems in diverse modelling domains.
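For concreteness, one of the standard Event-B proof obligations, invariant preservation (INV), requires that the axioms, the invariants, an event's guards, and its before-after predicate entail each invariant on the after-state. The tiny machine used to instantiate it below (a counter n bounded by a constant max, incremented by an event inc) is a made-up example, not one from the article.

```latex
% INV proof obligation for event 'inc' (guard: n < max, action: n := n + 1),
% with invariants n \in \mathbb{N} and n \le max (the machine is assumed for illustration):
\[
  n \in \mathbb{N} \;\wedge\; n \le max
  \;\wedge\; n < max
  \;\wedge\; n' = n + 1
  \;\;\vdash\;\; n' \le max
\]
```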