151.
In Use cases considered harmful, Simons analyzed the logical weaknesses of the UML use case notation and recommended fixing “the faulty notion of dependency” (Simons: Use cases considered harmful. 29th Conference on Techn. of OO Lang. and Syst., pp 194–203, 1999). The project sketched in this position paper is inspired by Simons’ critique. Its main contribution is a detailed meta model of the possible relations between use cases. Later in the project, this meta model is to be formalized in a natural deduction calculus, which will be implemented in Prolog. As a result of this formalization, a use case specification can be queried for inconsistencies as well as for test cases that must be observable once a software system is implemented based on the specification. Software tool support for this method is also under development.
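The kind of inconsistency query envisioned here can be illustrated with a short sketch (in Python rather than the Prolog implementation the project proposes; the relation and use case names are hypothetical): one typical inconsistency in a use case specification is a cycle in the include relation.

```python
# Hypothetical sketch: detect cyclic include-dependencies between use
# cases, one kind of inconsistency a formalized use case meta model
# could be queried for. All names are illustrative, not from the paper.

def find_include_cycle(includes):
    """Return a list of use cases forming an include-cycle, or None."""
    # includes: dict mapping a use case to the use cases it includes
    def dfs(node, stack, visited):
        visited.add(node)
        stack.append(node)
        for nxt in includes.get(node, ()):
            if nxt in stack:                      # back edge -> cycle
                return stack[stack.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, stack, visited)
                if cycle:
                    return cycle
        stack.pop()
        return None

    visited = set()
    for uc in includes:
        if uc not in visited:
            cycle = dfs(uc, [], visited)
            if cycle:
                return cycle
    return None

spec = {"Checkout": ["Pay"], "Pay": ["Authenticate"], "Authenticate": ["Checkout"]}
print(find_include_cycle(spec))  # reports the three-use-case cycle
```

A natural deduction calculus over the meta model would generalize this from one hard-coded relation to arbitrary derived relations between use cases.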
152.
We use Schnyder woods of 3-connected planar graphs to produce convex straight-line drawings on a grid of size The parameter depends on the Schnyder wood used for the drawing. This parameter is in the range The algorithm is a refinement of the face-counting algorithm; thus, in particular, the size of the grid is at most The above bound on the grid size simultaneously matches or improves all previously known bounds for convex drawings, in particular Schnyder's and the recent Zhang and He bound for triangulations and the Chrobak and Kant bound for 3-connected planar graphs. The algorithm takes linear time. The drawing algorithm has been implemented and tested. The expected grid size for the drawing of a random triangulation is close to For a random 3-connected plane graph, tests show that the expected size of the drawing is
153.
We show that the NP-hard optimization problems minimum and maximum weight exact satisfiability (XSAT) for a CNF formula C over n propositional variables equipped with arbitrary real-valued weights can be solved in O(||C|| · 2^(0.2441n)) time. To the best of our knowledge, the algorithms presented here are the first handling weighted XSAT optimization versions in non-trivial worst case time. We also investigate the corresponding weighted counting problems, namely we show that the number of all minimum, resp. maximum, weight exact satisfiability solutions of an arbitrarily weighted formula can be determined in O(n^2 · ||C|| + 2^(0.40567n)) time. In recent years only the unweighted counterparts of these problems have been studied (Dahllöf and Jonsson, An algorithm for counting maximum weighted independent sets and its applications. In: Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms, pp. 292–298, 2002; Dahllöf et al., Theor Comp Sci 320: 373–394, 2004; Porschen, On some weighted satisfiability and graph problems. In: Proceedings of the 31st Conference on Current Trends in Theory and Practice of Informatics (SOFSEM 2005). Lecture Notes in Comp. Science, vol. 3381, pp. 278–287. Springer, 2005).
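For intuition about the problem itself (not the O(||C|| · 2^(0.2441n)) algorithm, whose case analysis is far more involved), here is a naive brute-force sketch of minimum weight XSAT, where every clause must contain exactly one true literal:

```python
from itertools import product

# Brute-force minimum-weight XSAT, for intuition only: enumerate all 2^n
# assignments and keep the cheapest one satisfying every clause *exactly*
# (exactly one true literal per clause). The published algorithm is far
# more efficient; this only illustrates the problem definition.

def min_weight_xsat(n, clauses, weights):
    """clauses: lists of ints, +i / -i meaning x_i / not x_i (1-based).
    weights[i] is the weight charged when x_{i+1} is set to True."""
    best = None
    for bits in product([False, True], repeat=n):
        ok = all(
            sum((bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1])
                for l in clause) == 1
            for clause in clauses)
        if ok:
            w = sum(weights[i] for i in range(n) if bits[i])
            if best is None or w < best[0]:
                best = (w, bits)
    return best  # None if no exact-satisfying assignment exists

# exactly one of {x1, x2}; exactly one of {not x1, x3}
clauses = [[1, 2], [-1, 3]]
weights = [5.0, 1.0, 2.0]
print(min_weight_xsat(3, clauses, weights))  # -> (1.0, (False, True, False))
```

Replacing `w < best[0]` by `w > best[0]` gives the maximum weight variant; the counting versions tally all assignments attaining the optimum instead of keeping one.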
154.
Mobile communications beyond 3G will integrate different (but complementary) access technologies into a common platform to deliver value-added services and multimedia content in an optimal way. However, the numerous possible configurations of mobile networks complicate the dynamic deployment of mobile applications. Therefore, research is intensively seeking a service provisioning framework that is technology-independent, supports multiple wireless network technologies, and can interwork high-level service management tasks with network management operations. This paper presents an open value chain paradigm, a model for downloadable applications, and a mediating platform for service provisioning in beyond-3G mobile settings. Furthermore, we introduce mechanisms that support a coupled interaction between service deployment and network configuration operations, focusing on the dynamic provisioning of QoS state to data path devices according to the requirements of dynamically downloadable mobile value-added services (VAS).
Vangelis Gazis
155.
Illustrative context-preserving exploration of volume data
In volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Highly transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper, we present a new model for volume rendering, inspired by techniques from illustration. It provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques, sharing their easy and intuitive user control, but does not suffer from the drawback of missing context information.
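The modulation idea can be sketched as follows. This is an illustrative stand-in under simplified assumptions, not the paper's actual opacity function: it combines the four quantities the abstract names, and the parameter names k1 and k2 are invented.

```python
# Illustrative stand-in (not the paper's exact formula): opacity is
# reduced where shading is bright, gradients are weak, the sample is
# close to the eye, and little opacity has accumulated yet -- i.e. in
# "less important" regions -- controlled by two user parameters k1, k2.

def modulated_opacity(alpha, shading, grad_mag, distance, acc_alpha,
                      k1=1.0, k2=1.0):
    """All inputs normalized to [0, 1]; returns the reduced opacity."""
    # The exponent grows for bright, flat, near, not-yet-occluded
    # samples, so alpha ** exponent shrinks the opacity exactly there.
    importance = (1.0 - grad_mag) * shading * (1.0 - distance) * (1.0 - acc_alpha)
    exponent = 1.0 + k1 * importance ** k2
    return alpha ** exponent
```

A ray caster would call this per sample during front-to-back compositing, feeding back the accumulated opacity so that deeper structures regain visibility only where little has been occluded yet.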
156.
Exploded views are an illustration technique where an object is partitioned into several segments. These segments are displaced to reveal otherwise hidden detail. In this paper we apply the concept of exploded views to volumetric data in order to solve the general problem of occlusion. In many cases an object of interest is occluded by other structures. While transparency or cutaways can be used to reveal a focus object, these techniques remove parts of the context information. Exploded views, on the other hand, do not suffer from this drawback. Our approach employs a force-based model: the volume is divided into a part configuration controlled by a number of forces and constraints. The focus object exerts an explosion force causing the parts to arrange according to the given constraints. We show that this novel and flexible approach allows for a wide variety of explosion-based visualizations including view-dependent explosions. Furthermore, we present a high-quality GPU-based volume ray casting algorithm for exploded views which allows rendering and interaction at several frames per second.
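A minimal one-dimensional sketch of the force-based idea (an illustration only, not the paper's GPU implementation; the force law and all parameter names are hypothetical): the focus object pushes parts away while a displacement constraint keeps them within range.

```python
# Minimal 1-D sketch of a force-based explosion (illustrative only):
# parts along one axis are pushed away from a focus object by an
# "explosion force", while a constraint limits how far each part
# may travel from its rest position.

def explode(part_positions, focus, strength=0.5, max_shift=3.0, steps=100):
    """Displace parts away from `focus`; each part moves at most max_shift."""
    original = list(part_positions)
    pos = list(part_positions)
    for _ in range(steps):
        for i, p in enumerate(pos):
            direction = 1.0 if p >= focus else -1.0
            # hypothetical force law: decays with distance from the focus
            force = strength * direction / (1.0 + abs(p - focus))
            new_p = p + force
            # constraint: clamp total displacement from the rest position
            shift = max(-max_shift, min(max_shift, new_p - original[i]))
            pos[i] = original[i] + shift
    return pos

parts = [-1.0, 0.5, 2.0]        # rest positions of three volume parts
print(explode(parts, focus=0.0))  # parts driven outward, clamped at 3.0
```

In the paper's setting the parts are 3-D volume segments and further forces and constraints (including view-dependent ones) enter the configuration; the relaxation loop above is only the simplest instance of that scheme.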
157.
Uzquiano (Analysis 70:39–44, 2010) showed that the Hardest Logic Puzzle Ever (HLPE) [in its amended form due to Rabern and Rabern (Analysis 68:105–112, 2008)] has a solution in only two questions. Uzquiano concludes his paper by noting that his solution strategy naturally suggests a harder variation of the puzzle which, as he remarks, he does not know how to solve in two questions. Wheeler and Barahona (J Philos Logic, to appear, 2011) formulated a three-question solution to Uzquiano’s puzzle and gave an information-theoretic argument to establish that a two-question solution for Uzquiano’s puzzle does not exist. However, their argument crucially relies on a certain conception of what it means to answer self-referential yes–no questions truly and falsely. We propose an alternative such conception which, as we show, allows one to solve Uzquiano’s puzzle in two questions. The solution strategy adopted suggests an even harder variation of Uzquiano’s puzzle which, as we will show, can also be solved in two questions. Like all previous solutions to versions of HLPE, our solution is presented informally. The second part of the paper investigates the prospects of formally representing solutions to HLPE by exploiting theories of truth.
158.
One of the key problems in accelerometry-based gait analysis is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned with the axes of the subject. In this paper we present an algorithm designed to virtually align the axes of the accelerometer with the axes of the subject during walking sections. The algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons apply the sensors by themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm, the intra-axis correlation of the two sensors was on average 0.89±0.1 with a mean absolute error of 0.05g. We conclude that the algorithm was able to virtually adjust the skewed sensor node to the coordinate system of the subject.
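One plausible building block of such a virtual alignment (an assumption for illustration; the paper's published algorithm is not reproduced here) is to rotate the sensor frame so that the mean acceleration vector, dominated by gravity, maps onto the subject's vertical axis:

```python
import math

# Illustrative alignment step (an assumption, not the paper's full
# algorithm): rotate sensor vectors so that the mean acceleration over a
# quiet segment -- i.e. gravity -- maps onto the vertical (+z) axis,
# using Rodrigues' rotation formula.

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x/n for x in v]

def align_to_vertical(samples):
    """Return a function rotating sensor vectors so mean(samples) -> +z."""
    g = normalize([sum(s[i] for s in samples)/len(samples) for i in range(3)])
    z = [0.0, 0.0, 1.0]
    axis, cos_t = cross(g, z), dot(g, z)
    sin_t = math.sqrt(dot(axis, axis))
    if sin_t < 1e-12:                 # g (anti)parallel to z: skip in sketch
        return lambda v: list(v)
    k = normalize(axis)
    def rotate(v):                    # Rodrigues: v + sin*kxv + (1-cos)*kx(kxv)
        kv = cross(k, v)
        kkv = cross(k, kv)
        return [v[i] + sin_t*kv[i] + (1.0-cos_t)*kkv[i] for i in range(3)]
    return rotate

# skewed sensor: gravity shows up partly on the x axis
rotate = align_to_vertical([[0.3, 0.0, 0.95], [0.3, 0.0, 0.95]])
print(rotate([0.3, 0.0, 0.95]))   # close to [0, 0, |g|] after alignment
```

Aligning the remaining heading (rotation about the vertical) would need the walking-direction information the paper extracts from the gait itself; that part is omitted here.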
159.
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security, and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/rights system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies.
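The de-identification step mentioned above could, for example, be realized by keyed pseudonymization. This is a hypothetical sketch; the article gives no implementation details, and all field names and the key handling are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical sketch of de-identification via keyed pseudonyms (HMAC):
# identifying fields are dropped and replaced by a stable pseudonym, so
# records of the same participant remain linkable without exposing the
# identity. Field and key names are illustrative, not from the article.

SECRET_KEY = b"replace-with-a-managed-secret"      # assumption: managed key
IDENTIFYING_FIELDS = {"name", "address", "insurance_id"}

def pseudonymize(record):
    """Return a copy with identifying fields removed and a pseudonym added."""
    pid = hmac.new(SECRET_KEY, record["insurance_id"].encode(),
                   hashlib.sha256).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["pseudonym"] = pid
    return cleaned

rec = {"name": "Jane Doe", "address": "...", "insurance_id": "A123",
       "systolic_bp": 121}
print(pseudonymize(rec))
```

Because the pseudonym is an HMAC rather than a plain hash, re-identification requires the key, which in a layered role/rights system would be held only by a trusted party.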
160.
Herman’s algorithm is a synchronous randomized protocol for achieving self-stabilization in a token ring consisting of N processes. The interaction of tokens makes the dynamics of the protocol very difficult to analyze. In this paper we study the distribution of the time to stabilization, assuming that there are three tokens in the initial configuration. We show for arbitrary N and for an arbitrary timeout t that the probability of stabilization within time t is minimized by choosing as the initial three-token configuration the configuration in which the tokens are placed equidistantly on the ring. Our result strengthens a corollary of a theorem of McIver and Morgan (Inf. Process. Lett. 94(2): 79–84, 2005), which states that the expected stabilization time is maximized by the equidistant configuration.
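The protocol and the quantity studied can be sketched in a few lines of simulation (the standard formulation of Herman's protocol; the ring size N = 9 and the run count are arbitrary choices for illustration):

```python
import random

# Simulation sketch of Herman's protocol: N synchronous processes on a
# ring; each process holding a token keeps it or passes it clockwise
# with probability 1/2, and two tokens meeting on one process annihilate.
# Stabilization = exactly one token left.

def stabilization_time(n, token_positions, rng):
    tokens = set(token_positions)
    t = 0
    while len(tokens) > 1:
        arrivals = {}
        for p in tokens:
            q = (p + 1) % n if rng.random() < 0.5 else p
            arrivals[q] = arrivals.get(q, 0) + 1
        # a pair of tokens on the same process annihilates
        tokens = {p for p, c in arrivals.items() if c % 2 == 1}
        t += 1
    return t

rng = random.Random(0)             # seeded for reproducibility
n = 9
equidistant = [0, 3, 6]            # the extremal initial configuration
times = [stabilization_time(n, equidistant, rng) for _ in range(2000)]
print(sum(times) / len(times))     # empirical mean stabilization time
```

Repeating the experiment with non-equidistant starting positions such as [0, 1, 2] gives an empirical view of the ordering the paper proves: the equidistant start is stochastically the slowest to stabilize.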
Copyright©北京勤云科技发展有限公司  京ICP备09084417号