41.
Advances in sensor technology, wireless environments and data mining introduce new possibilities in the healthcare sector, realizing anytime-anywhere access to medical information. In this direction, the integration of packet-switched networks and sensor devices can be effective in deploying assistive environments, such as home monitoring for the elderly or for patients. In this paper we describe a policy-based architecture that utilizes wireless sensor devices, advanced network topologies and software agents to enable remote monitoring of patients and elderly people; through these technologies we achieve continuous monitoring of a patient’s condition and can take appropriate actions when necessary. We also present a software framework and network architecture that realize the provision of remote medical services in compliance with the imposed security and privacy requirements. A proof-of-concept prototype is also deployed, along with an evaluation of the overall architecture’s performance.
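The policy-driven monitoring loop described in this abstract can be illustrated with a minimal sketch. The rule format and the names used here (`metric`, `threshold`, `action`) are hypothetical illustrations, not the paper's actual policy language:

```python
def evaluate_policies(reading, policies):
    """Return the actions triggered by one sensor reading under a
    simple threshold-based rule set (hypothetical rule format)."""
    return [p["action"] for p in policies
            if p["metric"] == reading["metric"]
            and reading["value"] > p["threshold"]]

# Illustrative rules: escalate as the reading crosses higher thresholds.
policies = [
    {"metric": "heart_rate", "threshold": 120, "action": "notify_physician"},
    {"metric": "heart_rate", "threshold": 160, "action": "call_emergency"},
]
print(evaluate_policies({"metric": "heart_rate", "value": 130}, policies))
# ['notify_physician']
```

In a deployment like the one described, such rules would be evaluated continuously by agents as sensor readings arrive.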
42.
Contemporary distributed systems usually involve the spreading of information by means of ad-hoc dialogs between nodes (peers). This paradigm resembles the spreading of a virus in biology (epidemics). Such an abstraction allows us to design and implement information dissemination schemes with increased efficiency. In addition, elementary information generated at a certain node can be further processed to obtain more specific, higher-level and more valuable information. Such information carries specific semantic value that can be further interpreted and exploited throughout the network. This is also reflected in the epidemical framework through the idea of virus transmutation, which is a key component in our model. We establish an analytical framework for the study of a multi-epidemical information dissemination scheme in which diverse ‘transmuted epidemics’ are spread, and we validate the analytical model through simulations. Key outcomes of this study include the assessment of the efficiency of the proposed scheme and the prediction of the characteristics of the spreading process (multi-epidemical prevalence and decay).
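The epidemic abstraction can be illustrated with a minimal push-gossip simulation — a generic SI-style model, not the paper's multi-epidemical scheme with transmutation:

```python
import random

def gossip_spread(num_peers, seed=0):
    """SI-style push gossip: each informed peer forwards the message
    to one uniformly random peer per round, until all are informed.
    Returns the number of rounds taken."""
    rng = random.Random(seed)
    informed = {0}          # peer 0 originates the message
    rounds = 0
    while len(informed) < num_peers:
        rounds += 1
        contacts = [rng.randrange(num_peers) for _ in informed]
        informed.update(contacts)
    return rounds

print(gossip_spread(1000))  # number of rounds grows roughly like log(n)
```

The logarithmic spreading time is what makes epidemic schemes attractive for information dissemination at scale.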
43.
Stochastic Flow Models (SFMs) are stochastic hybrid systems that abstract the dynamics of many complex discrete event systems and provide the basis for their control and optimization. SFMs have been used to date to study systems with a single user class or some multiclass settings in which performance metrics are not class-dependent. In this paper, we develop an SFM framework for multiple classes and class-dependent performance objectives, where competing classes employ threshold control policies and service is provided on a First Come First Serve (FCFS) basis. In this framework, we analyze new phenomena that result from the interaction of the different classes and give rise to a new class of “induced” events that capture delays in the SFM dynamics. We derive Infinitesimal Perturbation Analysis (IPA) estimators for derivatives of various class-dependent objectives, and use them as the basis for on-line optimization algorithms that apply to the underlying discrete event system (not the SFM). This allows us to contrast system-centric and user-centric objectives, thus putting the resource contention problem in a game framework. The unbiasedness of the IPA estimators is established, and numerical results are provided to illustrate the effectiveness of our method for the case where there are no constraints on the controllable thresholds, and to demonstrate the gap between the results of system-centric and user-centric optimization.
44.
This article focuses on the optimization of PCDM, a parallel, two-dimensional (2D) Delaunay mesh generation application, and its interaction with parallel architectures based on simultaneous multithreading (SMT) processors. We first present the step-by-step effect of a series of optimizations on performance. These optimizations improve the performance of PCDM by up to a factor of six. They target issues that very often limit the performance of scientific computing codes. We then evaluate the interaction of PCDM with a real SMT-based SMP system, using both high-level metrics, such as execution time, and low-level information from hardware performance counters.
45.
The advent of the World Wide Web has made an enormous amount of information available to everyone, and the widespread use of digital equipment enables end-users (peers) to produce their own digital content. This vast amount of information requires scalable data management systems. Peer-to-peer (P2P) systems have so far been well established in several application areas, with file-sharing being the most prominent. The next challenge that needs to be addressed is (more complex) data sharing, management and query processing, thus facilitating the delivery of a wide spectrum of novel data-centric applications to the end-user, while providing high Quality-of-Service. In this paper, we propose a self-organizing P2P system that is capable of identifying peers with similar content and intentionally assigning them to the same super-peer. During content retrieval, fewer super-peers need to be contacted, and therefore efficient similarity search is supported, in terms of reduced network traffic and fewer contacted peers. Our approach increases the responsiveness and reliability of a P2P system, and we demonstrate its advantages using large-scale simulations.
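The super-peer assignment idea can be sketched as routing a joining peer to the super-peer whose aggregate content profile is most similar to the peer's own content. The term-vector profiles and cosine similarity used here are an assumption for illustration, not necessarily the paper's similarity measure:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_to_super_peer(peer_terms, super_peer_profiles):
    """Route a joining peer to the super-peer with the most
    similar aggregate content profile."""
    vec = Counter(peer_terms)
    return max(super_peer_profiles,
               key=lambda sp: cosine(vec, super_peer_profiles[sp]))

# Hypothetical super-peer profiles aggregated from their member peers.
profiles = {
    "sp_music":  Counter({"audio": 5, "mp3": 4}),
    "sp_papers": Counter({"pdf": 6, "research": 3}),
}
print(assign_to_super_peer(["pdf", "pdf", "research"], profiles))  # sp_papers
```

Grouping similar peers under one super-peer is what lets a similarity query stop after contacting only a few super-peers.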
48.
Data stream values are often associated with multiple aspects. For example, each value observed at a given time-stamp from environmental sensors may have an associated type (e.g., temperature, humidity, etc.) as well as a location. Time-stamp, type and location are the three aspects, which can be modeled using a tensor (high-order array). However, the time aspect is special, with a natural ordering, and with successive time-ticks usually having correlated values. Standard multiway analysis ignores this structure. To capture it, we propose 2 Heads Tensor Analysis (2-heads), which provides a qualitatively different treatment of time. Unlike most existing approaches that use a PCA-like summarization scheme for all aspects, 2-heads treats the time aspect carefully. 2-heads combines the power of classic multilinear analysis with wavelets, leading to a powerful mining tool. Furthermore, 2-heads has several other advantages: (a) it can be computed incrementally in a streaming fashion, (b) it has a provable error guarantee, and (c) it achieves significant compression ratios compared to competitors. Finally, we show experiments on real datasets, and we illustrate how 2-heads reveals interesting trends in the data. This is an extended abstract of an article published in the Data Mining and Knowledge Discovery journal.
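The two-headed treatment — wavelets for the time aspect, PCA-like summaries for the others — can be sketched as a single Haar wavelet step on the time mode followed by truncated SVD bases for the remaining modes. This is a hypothetical simplification for illustration, not the published 2-heads algorithm:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform along the first axis
    (requires an even first dimension)."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([avg, diff], axis=0)

def two_heads_sketch(tensor, rank=2):
    """Sketch: wavelet-transform the time mode (axis 0), then take
    truncated SVD bases of the unfoldings of the remaining modes."""
    coeffs = haar_step(tensor)                 # time gets wavelets
    factors = []
    for mode in range(1, tensor.ndim):         # other modes get PCA-like bases
        unfolding = np.moveaxis(coeffs, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :rank])
    return coeffs, factors

# Toy stream: time x type x location
t = np.random.default_rng(0).normal(size=(8, 4, 3))
coeffs, factors = two_heads_sketch(t)
print([f.shape for f in factors])  # [(4, 2), (3, 2)]
```

The asymmetry is the point: the wavelet step exploits the ordering and correlation of successive time-ticks, which an orderless PCA summary would ignore.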
49.
Content distribution networks (CDNs) improve scalability and reliability by replicating content to the “edge” of the Internet. Apart from the pure networking issues of CDNs relevant to the establishment of the infrastructure, some very crucial data management issues must be resolved to exploit the full potential of CDNs to reduce “last mile” latencies. A very important issue is the selection of the content to be prefetched to the CDN servers. All the approaches developed so far assume the existence of adequate content popularity statistics to drive the prefetch decisions. Such information, though, is not always available, or it is extremely volatile, rendering such methods problematic. To address this issue, we develop self-adaptive techniques to select the outsourced content in a CDN infrastructure, requiring no a priori knowledge of request statistics. We identify clusters of “correlated” Web pages in a site, called Web site communities, and make these communities the basic outsourcing unit. Through a detailed simulation environment, using both real and synthetic data, we show that the proposed techniques are very robust and effective in reducing the user-perceived latency, performing very close to an infeasible, off-line policy that has full knowledge of the content popularity.
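The idea of grouping "correlated" pages into communities without popularity statistics can be sketched with a simple co-occurrence rule over request sessions; the threshold and the union-find grouping here are illustrative assumptions, not the paper's actual community-detection method:

```python
from collections import Counter
from itertools import combinations

def find_communities(sessions, min_cooccur=2):
    """Group pages that are frequently requested together into
    'communities' (the outsourcing unit): pages co-occurring in at
    least `min_cooccur` sessions are merged via union-find."""
    pairs = Counter()
    for s in sessions:
        pairs.update(combinations(sorted(set(s)), 2))

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:              # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), n in pairs.items():
        if n >= min_cooccur:
            parent[find(a)] = find(b)      # union the two pages

    groups = {}
    for page in {p for s in sessions for p in s}:
        groups.setdefault(find(page), set()).add(page)
    return list(groups.values())

sessions = [["/a", "/b"], ["/a", "/b", "/c"], ["/c", "/d"], ["/a", "/b"]]
print(find_communities(sessions))
```

With these toy sessions, `/a` and `/b` co-occur often enough to form one community, while `/c` and `/d` remain singletons — communities, not individual pages, would then be pushed to the CDN servers.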
50.
The paper presents computer-aided methods that place a dental implant and suggest its size during the pre-operative planning stage, in conformance with introduced optimization criteria and established clinical requirements. Based on computed tomography data of the jaw and prosthesis anatomy, single-tooth cases are planned for the best-suited implant insertion at a user-defined region. An optimum implantation axis line is produced, and cylindrical implants of various candidate sizes are then automatically positioned, with their occlusal end leveled to the bone ridge, and evaluated. Radial safety margins are used to assess the implant's safety distance from neighboring anatomical structures, while bone quantity and quality are estimated and taken into consideration. A case study demonstrates the concept and allows for its discussion.
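The radial safety margin check can be sketched as a point-to-line distance test between a candidate implantation axis and nearby anatomical landmarks; the coordinates below are made up for illustration, and the real system works on full CT-derived geometry rather than a few points:

```python
import math

def min_radial_clearance(axis_point, axis_dir, structures):
    """Minimum radial (perpendicular) distance from a candidate
    implantation axis -- a line through `axis_point` with unit
    direction `axis_dir` -- to nearby anatomical landmark points."""
    def radial_dist(p):
        v = [p[i] - axis_point[i] for i in range(3)]
        t = sum(v[i] * axis_dir[i] for i in range(3))   # projection on axis
        perp = [v[i] - t * axis_dir[i] for i in range(3)]
        return math.sqrt(sum(c * c for c in perp))
    return min(map(radial_dist, structures))

# Hypothetical landmark points along a nerve canal (mm).
nerve_canal = [(4.0, 0.0, -10.0), (5.0, 0.5, -12.0)]
print(min_radial_clearance((0, 0, 0), (0, 0, 1), nerve_canal))  # 4.0
```

A candidate axis and implant diameter would be rejected when this clearance falls below the required radial safety margin.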