71.
Worst-case execution time (WCET) analysis is concerned with computing as precise a bound as possible for the maximum time the execution of a program can take. This information is indispensable for developing safety-critical real-time systems, e.g., in the avionics and automotive fields. Starting with the initial works of Chen, Mok, Puschner, Shaw, and others in the mid and late 1980s, WCET analysis turned into a well-established and vibrant field of research and development in academia and industry. The increasing number and diversity of hardware and software platforms and the ongoing rapid technological advancement became drivers for the development of a wide array of distinct methods and tools for WCET analysis. The precision, generality, and efficiency of these methods and tools depend greatly on the expressiveness and usability of the annotation languages that are used to describe feasible and infeasible program paths. In this article we survey the annotation languages that we consider formative for the field. By investigating and comparing their individual strengths and limitations with respect to a set of pivotal criteria, we provide a coherent overview of the state of the art. By identifying open issues, we encourage further research. In this way, our approach is orthogonal and complementary to a recent survey by Wilhelm et al. of WCET analysis methods and tools developed and used in academia and industry.
72.
Introduction: All hospitals in the province of Styria (Austria) are well equipped with sophisticated information technology, which provides all-encompassing on-screen patient information. Previous research on the theoretical properties, advantages, and disadvantages of reading from paper vs. reading from a screen has led to the assumption that reading from a screen is slower, less accurate, and more tiring. However, recent flat-screen technology, especially LCD-based, is of such high quality that this assumption should now be challenged. As the electronic storage and presentation of information has many advantages in addition to faster transfer and processing, the use of electronic screens in clinics should outperform traditional hardcopy in both performance and preference ratings. The study was conducted in a county hospital in Styria, Austria, with 111 medical professionals working in a real-life setting. Each was asked to read original and authentic diagnosis reports, a gynecological report and an internal medicine document, on both screen and paper in randomly assigned order. Reading comprehension was measured by the Chunked Reading Test, and the speed and accuracy of reading performance were quantified. To get a full understanding of the clinicians' preferences, subjective ratings were also collected. Results: Wilcoxon signed-rank tests showed no significant differences in reading performance between paper and screen. However, medical professionals showed a marked (90%) preference for reading from paper. Despite the high quality and benefits of electronic media, paper still has some qualities that cannot be provided electronically to date.
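The paired comparison reported above can be illustrated with a small, self-contained sketch of the Wilcoxon signed-rank statistic. The reading times below are invented for illustration, not the study's data, and a real analysis would use a statistics package to obtain the p-value.

```python
# Hedged sketch: paired reading times (seconds) on paper vs. screen,
# analyzed with a hand-rolled Wilcoxon signed-rank statistic.
# All numbers are invented for illustration.

def wilcoxon_signed_rank(paper, screen):
    """Return (W+, W-): rank sums of positive and negative differences."""
    diffs = [p - s for p, s in zip(paper, screen) if p != s]
    # Rank by absolute difference, giving tied values their average rank.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

paper  = [212, 198, 240, 225, 210, 230, 205, 218]
screen = [215, 202, 238, 231, 209, 236, 211, 224]
w_plus, w_minus = wilcoxon_signed_rank(paper, screen)
print(w_plus, w_minus)
```

With average ranks for ties, W+ + W- always equals n(n+1)/2 over the non-zero differences, which is a quick sanity check on the ranking.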
73.
The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view of the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering the 3D scene, is set up according to the display field of view (DFOV). A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. It has been shown that this distortion has the potential to affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks. In this paper, we analyze the user's perception of a VE displayed in an HMD and rendered with different GFOVs. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted two experiments to identify perspective projections for HMDs that subjects judge as natural, even if these perspectives deviate from the perspectives inherently defined by the DFOV. In the first experiment, subjects had to adjust the GFOV for a rendered virtual laboratory such that their perception of the virtual replica matched their perception of the real laboratory, which they saw before the virtual one. In the second experiment, we displayed the same virtual laboratory but restricted the viewing condition in the real world to simulate the limited viewing condition in an HMD environment. We found that subjects evaluate a GFOV as natural when it is larger than the actual DFOV of the HMD, in some cases by up to 50 percent, even when subjects viewed the real space with a limited field of view.
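The minification or magnification caused by a GFOV/DFOV mismatch can be quantified under a standard symmetric pinhole projection model. This model is an assumption on our part (the abstract does not give the exact formula): the displayed size gain is the ratio of the two half-angle tangents.

```python
import math

def minification(gfov_deg, dfov_deg):
    """Size gain of displayed imagery when a scene rendered with geometric
    FOV `gfov_deg` is shown on a display with physical FOV `dfov_deg`.
    1.0 = undistorted; < 1 minifies, > 1 magnifies. Assumes a symmetric
    pinhole projection; the paper's exact model may differ."""
    return math.tan(math.radians(dfov_deg) / 2) / math.tan(math.radians(gfov_deg) / 2)

# A GFOV 50% larger than a 60-degree display FOV minifies the imagery:
print(round(minification(90.0, 60.0), 3))
```

Matching fields of view yield a gain of exactly 1; a GFOV smaller than the DFOV magnifies instead.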
74.
Multimedia delivery in mobile multiaccess network environments has emerged as a key area within the future Internet research domain. When network heterogeneity is coupled with the proliferation of multiaccess capabilities in mobile handheld devices, one can expect many new avenues for developing novel services and applications. New mechanisms for audio/video delivery over multiaccess networks will define the next generation of major distribution technologies, but will require significantly more information to operate at their full potential. In this paper we present and evaluate a distributed information service that can enhance media delivery over such multiaccess networks. We describe the proposed information service, which is built upon the new distributed control and management framework (DCMF) and the mobility management triggering functionality (TRG). We use a testbed that includes 3G/HSPA, WLAN, and WiMAX network accesses to evaluate the proposed architecture, and we present results that demonstrate its value in enhancing video delivery and minimizing service disruption in an involved scenario.
75.
Automated video analysis lacks reliability when searching for unknown events in video data. The practical approach is to watch all the recorded video data, where applicable in fast-forward mode. In this paper we present a method to adapt the playback velocity of the video to the temporal information density, so that users can explore the video under controlled cognitive load. The proposed approach can cope with static changes and is robust to video noise. First, we formulate temporal information as the symmetrized Rényi divergence, deriving this measure from signal coding theory. Further, we discuss the animated visualization of accelerated video sequences and propose a physiologically motivated blending approach to cope with arbitrary playback velocities. Finally, we compare the proposed method with current approaches in this field through experiments and a qualitative user study, and show its advantages over motion-based measures.
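A minimal sketch of such a measure: the symmetrized Rényi divergence between histograms of consecutive frames, mapped to a playback velocity. The histogram features, the order α = 2, and the exponential speed mapping are our own illustrative choices; the paper derives its measure from signal coding theory and may differ in all three.

```python
import math

def renyi(p, q, alpha=2.0):
    """Rényi divergence D_alpha(P || Q) for discrete distributions.
    Terms where either probability is zero are skipped for robustness."""
    s = sum(pi ** alpha * qi ** (1.0 - alpha)
            for pi, qi in zip(p, q) if pi > 0 and qi > 0)
    return math.log(s) / (alpha - 1.0)

def symmetric_renyi(p, q, alpha=2.0):
    """Symmetrized divergence: D(P || Q) + D(Q || P)."""
    return renyi(p, q, alpha) + renyi(q, p, alpha)

def playback_speed(info, v_min=1.0, v_max=16.0, scale=1.0):
    """Map temporal information to a playback velocity: frames that differ
    a lot slow playback towards v_min; static stretches approach v_max.
    This mapping is our own illustration."""
    return v_min + (v_max - v_min) * math.exp(-scale * info)

# Normalized grey-level histograms of two consecutive frames (toy data):
prev = [0.25, 0.25, 0.25, 0.25]
curr = [0.40, 0.30, 0.20, 0.10]
info = symmetric_renyi(prev, curr)
print(playback_speed(info))
```

Identical consecutive frames give zero divergence and hence maximum fast-forward speed, which is the behavior the abstract describes for static scenes.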
76.
It is shown that the compressed word problem for an HNN-extension ⟨H, t | t⁻¹at = φ(a) (a ∈ A)⟩ with A finite is polynomial-time Turing-reducible to the compressed word problem for the base group H. An analogous result for amalgamated free products is shown as well.
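In this setting, "compressed" conventionally means that input words are given as straight-line programs (SLPs): context-free grammars deriving exactly one word. The sketch below, with hypothetical rule names, shows why compression matters; an SLP with n rules can derive a word of length about 2^n, yet quantities such as word length remain computable in polynomial time.

```python
# A straight-line program (SLP): each rule is either a terminal letter or
# the concatenation of two earlier nonterminals.

def expand(slp, sym):
    """Naively expand nonterminal `sym` (exponential; illustration only)."""
    rule = slp[sym]
    if isinstance(rule, str):
        return rule
    left, right = rule
    return expand(slp, left) + expand(slp, right)

def length(slp, sym, memo=None):
    """Length of the derived word, in time polynomial in the SLP size."""
    if memo is None:
        memo = {}
    if sym not in memo:
        rule = slp[sym]
        if isinstance(rule, str):
            memo[sym] = 1
        else:
            memo[sym] = length(slp, rule[0], memo) + length(slp, rule[1], memo)
    return memo[sym]

# X3 derives a^8 with only four rules; Xn would derive a^(2^n):
slp = {"X0": "a", "X1": ("X0", "X0"), "X2": ("X1", "X1"), "X3": ("X2", "X2")}
print(length(slp, "X3"))   # 8
print(expand(slp, "X3"))   # aaaaaaaa
```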
77.
Higher-order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch-oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher-order data by resampling, which trades off interactivity and quality. To overcome these limitations, we have developed a distributed visualization system that allows interactive exploration of non-conforming unstructured grids resulting from space-time discontinuous Galerkin simulations, in which each cell has its own higher-order polynomial solution. Our system employs GPU-based raycasting for direct volume rendering of complex grids that feature non-convex, curvilinear cells with varying polynomial degree. Frequency-based adaptive sampling accounts for the high variations along rays. For distribution across a GPU cluster, the initial object-space partitioning is determined by cell characteristics such as the polynomial degree and is adapted at runtime by a load-balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.
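One plausible reading of "frequency-based adaptive sampling" along a ray is Nyquist-style: a polynomial of degree p along a ray segment can oscillate at most on the order of p times, so sampling density is scaled with the cell's polynomial degree. The criterion below is our own illustration and may differ from the paper's actual scheme.

```python
def adaptive_samples(cell_degree, cell_length, oversample=2.0):
    """Sample positions along a ray segment through one cell: a degree-p
    polynomial solution warrants roughly oversample * p samples, so
    higher-order cells are sampled more densely than linear ones.
    This heuristic is an assumption, not the paper's exact criterion."""
    n = max(2, int(oversample * cell_degree) + 1)
    step = cell_length / (n - 1)
    return [i * step for i in range(n)]

# A high-order cell gets sampled more densely than a linear one:
print(len(adaptive_samples(1, 1.0)), len(adaptive_samples(8, 1.0)))
```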
78.
We present a new approach for an average-case analysis of algorithms and data structures that supports a non-uniform distribution of the inputs and is based on the maximum likelihood training of stochastic grammars. The approach is exemplified by an analysis of the expected size of binary tries as well as by three sorting algorithms and it is compared to the known results that were obtained by traditional techniques. Investigating traditional settings like the random permutation model, we rediscover well-known results formerly derived by pure analytic methods; changing to biased data yields original results. All but one step of our analysis can be automated on top of a computer-algebra system. Thus our new approach can reduce the effort required for an average-case analysis, allowing for the consideration of realistic input distributions with unknown distribution functions at the same time. As a by-product, our approach yields an easy way to generate random combinatorial objects according to various probability distributions.
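As a companion to the analytic approach, the expected size of binary tries under a uniform input model can be checked by simple Monte-Carlo simulation. The sketch below is ours, not the paper's grammar-based method; it generates random binary strings, builds the trie, and averages the node count.

```python
import random

def insert(trie, bits):
    """Insert a bit string into a nested-dict trie (children keyed 0/1)."""
    node = trie
    for b in bits:
        node = node.setdefault(b, {})

def count_nodes(trie):
    """Total node count, including the root."""
    return 1 + sum(count_nodes(child) for child in trie.values())

def average_trie_size(n, depth=20, trials=200, seed=1):
    """Monte-Carlo estimate of the expected node count of a binary trie
    over n uniformly random bit strings. Parameters are illustrative."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        trie = {}
        for _ in range(n):
            insert(trie, [rng.randint(0, 1) for _ in range(depth)])
        total += count_nodes(trie)
    return total / trials

print(average_trie_size(16))
```

Estimates like this are useful for cross-checking a derived expectation formula on small n before trusting it in general.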
79.
80.
Powder injection molding is a preferred technology for the production of micro parts and microstructured parts. Derived from the well-known thermoplastic injection molding technique, it is suitable for large-scale production of ceramic and metallic parts without final machining. Achieving good surface quality and controlling part size and distortion are essential for mass production. This requires all process steps to be in place: part design adapted to MIM/CIM technology, an appropriate choice of powder and binder components, and injection molding simulation for sprue design. For the injection molding itself, high-quality mold inserts, high-precision molding with suitable machines such as the Battenfeld Microsystem50 (or a standard machine with special equipment such as variothermal temperature control or evacuation of the molding tool), and an adapted debinding and sintering process must be available. Results of producing micro parts by powder injection molding of ceramic feedstock are presented.