41.
Some of the current best conformant probabilistic planners focus on finding a fixed-length plan with maximal probability. While these approaches can find optimal solutions, they often do not scale for large problems or plan lengths. As has been shown in classical planning, heuristic search outperforms bounded-length search (especially when an appropriate plan length is not given a priori). The problem with applying heuristic search in probabilistic planning is that effective heuristics are as yet lacking. In this work, we apply heuristic search to conformant probabilistic planning by adapting planning graph heuristics developed for non-deterministic planning. We evaluate a straightforward application of these planning graph techniques, which amounts to exactly computing a distribution over many relaxed planning graphs (one planning graph for each joint outcome of uncertain actions at each time step). Computing this distribution is costly, so we apply Sequential Monte Carlo (SMC) to approximate it. One important issue that we explore in this work is how to automatically determine the number of samples required for effective heuristic computation. We empirically demonstrate on several domains how our efficient, but sometimes suboptimal, approach enables our planner to solve much larger problems than an existing optimal bounded-length probabilistic planner and still find reasonable-quality solutions.
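A minimal sketch of the sampling idea described in this abstract, with hypothetical callbacks (`sample_outcome`, `build_relaxed_graph`, `relaxed_plan_length`) standing in for the planner's actual machinery; the stopping rule based on the standard error of the mean is an illustrative assumption, not the paper's sample-size criterion:

```python
import statistics

def sampled_heuristic(build_relaxed_graph, sample_outcome, state,
                      min_samples=16, max_samples=256, rel_err=0.05):
    """Monte Carlo estimate of a relaxed planning-graph heuristic.

    Hypothetical sketch: sample_outcome(state) draws one joint outcome of the
    uncertain actions, build_relaxed_graph(outcome) returns a relaxed planning
    graph exposing relaxed_plan_length(). Sampling stops once the standard
    error of the mean falls below rel_err of the mean.
    """
    values = []
    while len(values) < max_samples:
        outcome = sample_outcome(state)             # one joint action outcome
        graph = build_relaxed_graph(outcome)        # one relaxed planning graph
        values.append(graph.relaxed_plan_length())  # per-sample heuristic value
        if len(values) >= min_samples:
            mean = statistics.mean(values)
            sem = statistics.stdev(values) / len(values) ** 0.5
            if mean == 0 or sem / mean <= rel_err:  # converged enough
                break
    return statistics.mean(values)
```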
42.
The objective of this paper is to propose an architecture for a CAble TeleVision (CATV) network capable of supporting two-way transmission. This evolution is necessary for the survival of the CATV industry in an era of deregulation and of the development of the B-ISDN by the telecommunications companies. A communication transactional service is then considered and its performance is analyzed under realistic assumptions.
43.
Ribonucleic acids (RNAs) possess great therapeutic potential and can be used to treat a variety of diseases. The unique biophysical properties of RNAs, such as high molecular weight, negative charge, hydrophilicity, low stability, and potential immunogenicity, require chemical modification and the development of carriers to enable intracellular delivery of RNAs for clinical use. A variety of nanomaterials have been developed for the effective in vivo delivery of short/small RNAs, messenger RNAs, and RNAs required for gene editing technologies, including clustered regularly interspaced short palindromic repeats (CRISPR)/Cas. This review outlines the challenges of delivering RNA therapeutics, explores the chemical synthesis of RNA modifications and carriers, and describes the efforts to design nanomaterials that can be used for a variety of clinical indications.
44.
The Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices have been analytically shown to be more efficient sampling lattices than the traditional Cartesian Cubic (CC) lattice, but there has been no estimate of their visual comparability. Two perceptual studies (each with N = 12 participants) compared the visual quality of images rendered from BCC and FCC lattices to images rendered from the CC lattice. Images were generated from two signals: the commonly used Marschner-Lobb synthetic function and a computed tomography scan of a fish tail. Observers found that BCC and FCC could produce images of visual quality comparable to CC using 30-35 percent fewer samples. For the images used in our studies, the L2 error metric shows high correlation with the judgement of human observers. Using the L2 metric as a proxy, the results of the experiments appear to extend across a wide range of images and parameter choices.
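A minimal sketch (in Python, assuming NumPy) of the L2 error metric used above as a proxy for perceived quality; the image arrays and their alignment are assumptions for illustration:

```python
import numpy as np

def l2_error(reference, test):
    """Root-mean-square (L2) difference between two images.

    Both inputs are float arrays of identical shape, e.g. renderings of the
    Marschner-Lobb function reconstructed from CC and BCC samplings; lower
    values are taken to indicate closer agreement with the reference image.
    """
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    return np.sqrt(np.mean((ref - tst) ** 2))
```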
45.
We consider the on-line version of the maximum vertex-disjoint path problem when the underlying network is a tree. In this problem, a sequence of requests arrives in an on-line fashion, where every request is a path in the tree. The on-line algorithm may accept a request only if it does not share a vertex with a previously accepted request. The goal is to maximize the number of accepted requests. It is known that no on-line algorithm can have a competitive ratio better than Ω(log n) for this problem, even if the algorithm is randomized and the tree is simply a line. Obviously, it is desirable to beat the logarithmic lower bound. Adler and Azar (Proc. of the 10th ACM-SIAM Symposium on Discrete Algorithms, pp. 1–10, 1999) showed that if preemption is allowed (namely, previously accepted requests may be discarded, but once a request is discarded it can no longer be accepted), then there is a randomized on-line algorithm that achieves a constant competitive ratio on the line. In the current work we present a randomized on-line algorithm with preemption that has a constant competitive ratio on any tree. Our results carry over to the related problem of maximizing the number of accepted paths subject to a capacity constraint on vertices (in the disjoint path problem this capacity is 1). Moreover, if the available capacity is at least 4, randomization is not needed and our on-line algorithm becomes deterministic.
46.
Instructors in higher education are disseminating instructional content via podcasting, as many rally behind the technology’s potential benefits. Others have expressed concern about the risk of deleterious effects that might accompany the adoption of podcasting, such as lower class attendance. Yet relatively few studies have investigated students’ perceptions of podcasting for educational purposes, especially in relation to different podcasting forms: repetitive and supplemental. The present study explored students’ readiness and attitudes towards these two forms of podcasting to provide fundamental information for future researchers and educators. The results indicated that students may not be as ready or eager to use podcasting for repetitive or supplemental educational purposes as much as we think they are, but they could be persuaded.
47.
Modern large-scale grid computing systems for processing advanced science and engineering applications rely on geographically distributed clusters. In such highly distributed environments, estimating the available bandwidth between clusters is a key issue for efficient task scheduling. We analyze the performance of two well-known available bandwidth estimation tools, pathload and abget, with the aim of using them in grid environments. Differently from previous investigations (Jain et al.; Shriram et al., in Passive and Active Network Measurement: 6th International Workshop, PAM 2005, Springer, Berlin, 2005), our experiments consider a series of relevant metrics such as accuracy of the estimation, convergence time, degree of intrusion in the grid links, and ability to handle multiple simultaneous estimations. No previous work has analyzed the use of available bandwidth tools for the derivation of efficient grid scheduling.
48.
Fuzzy grey relational analysis for software effort estimation   (Cited by: 1; self-citations: 1; other citations: 0)
Accurate and credible software effort estimation is a challenge for academic research and the software industry. Among the many software effort estimation models in existence, Estimation by Analogy (EA) is still one of the techniques preferred by software engineering practitioners because it mimics the human problem-solving approach. The accuracy of such a model depends on the characteristics of the dataset, which is subject to considerable uncertainty. The inherent uncertainty in software attribute measurement has a significant impact on estimation accuracy because these attributes are measured based on human judgment and are often vague and imprecise. To overcome this challenge we propose a new formal EA model based on the integration of Fuzzy set theory with Grey Relational Analysis (GRA). Fuzzy set theory is employed to reduce uncertainty in the distance measure between two tuples at the k-th continuous feature, |x_o(k) − x_i(k)|. GRA is a problem-solving method that is used to assess the similarity between two tuples with M features. Since some of these features need not be continuous and may have nominal or ordinal scale types, aggregating different forms of similarity measures would increase uncertainty in the similarity degree. Thus GRA is mainly used to reduce uncertainty in the distance measure between two software projects for both continuous and categorical features. Both techniques are suitable when the relationship between effort and other effort drivers is complex. Experimental results showed that the integration of GRA with Fuzzy set theory produced credible estimates when compared with the results obtained using Case-Based Reasoning, Multiple Linear Regression, and Artificial Neural Network methods.
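A simplified sketch (in Python, assuming NumPy) of the grey relational grade between two feature vectors, using the customary distinguishing coefficient ζ = 0.5; in the full method the Δmin/Δmax extremes are taken over all analogue projects, and the fuzzy handling of imprecise and categorical features described above is not reproduced here:

```python
import numpy as np

def grey_relational_grade(reference, candidate, zeta=0.5):
    """Grey relational grade between a target project and one analogue.

    reference and candidate are vectors of the M normalised continuous
    features; zeta is the distinguishing coefficient.
    """
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    delta = np.abs(ref - cand)          # |x_o(k) - x_i(k)| for each feature k
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0:
        return 1.0                      # identical feature vectors
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return float(coeff.mean())          # grade: mean relational coefficient
```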
49.
Confronted with decreasing margins and rising customer demand for integrated solutions, manufacturing companies integrate complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to support and guide such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates the current support of reference models and modeling languages for setting up conceptual models for an integrated design and delivery of value bundles. Subsequently, designing modeling languages and reference models to fit the requirements of conceptual models in product–service systems is presented as an upcoming challenge in Service Research. To guide further research, first steps are proposed by exemplarily integrating reference models and modeling languages stemming from the service and manufacturing domains.
50.
We propose a James-Stein-type shrinkage estimator for the parameter vector in a general linear model when it is suspected that some of the parameters may be restricted to a subspace. The James-Stein estimator is shown to demonstrate asymptotically superior risk performance relative to the conventional least squares estimator under quadratic loss. An extensive simulation study based on a multiple linear regression model and a logistic regression model further demonstrates the improved performance of this James-Stein estimator in finite samples. The application of this new estimator is illustrated using Ontario newborn infants data spanning four fiscal years.
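An illustrative sketch (in Python, assuming NumPy), not the paper's exact estimator: a positive-part Stein-type rule that shrinks the unrestricted least squares fit toward the fit obtained under a linear restriction R·beta = r, with the amount of shrinkage driven by the Wald statistic for that restriction:

```python
import numpy as np

def shrinkage_estimator(X, y, R, r):
    """Positive-part James-Stein-type shrinkage toward a restricted OLS fit.

    X is the n-by-p design matrix, y the response vector, and R, r encode the
    suspected subspace restriction R @ beta = r (requires more than two
    restrictions for the classical (q - 2) shrinkage constant).
    """
    n, p = X.shape
    q = R.shape[0]                                   # number of restrictions
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_u = XtX_inv @ X.T @ y                       # unrestricted OLS
    # Restricted OLS via the standard Lagrangian correction.
    A = R @ XtX_inv @ R.T
    gap = R @ beta_u - r
    beta_r = beta_u - XtX_inv @ R.T @ np.linalg.solve(A, gap)
    resid = y - X @ beta_u
    sigma2 = resid @ resid / (n - p)                 # error variance estimate
    wald = gap @ np.linalg.solve(A, gap) / sigma2    # Wald statistic for R beta = r
    shrink = max(0.0, 1.0 - (q - 2) / wald) if wald > 0 else 0.0
    return beta_r + shrink * (beta_u - beta_r)
```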