Full text (subscription): 547 articles; free access: 12 articles (559 results in total).
By subject: Electrical engineering (5), Chemical industry (98), Machinery and instruments (3), Building science (9), Energy and power (19), Light industry (21), Water conservancy engineering (4), Radio and electronics (91), General industrial technology (61), Metallurgical industry (15), Atomic energy technology (1), Automation technology (232).
By year: 2024 (1), 2023 (3), 2022 (4), 2021 (8), 2020 (3), 2019 (6), 2018 (12), 2017 (7), 2016 (22), 2015 (14), 2014 (26), 2013 (31), 2012 (35), 2011 (50), 2010 (40), 2009 (42), 2008 (38), 2007 (31), 2006 (31), 2005 (19), 2004 (16), 2003 (17), 2002 (18), 2001 (10), 2000 (8), 1999 (6), 1998 (8), 1997 (5), 1996 (2), 1995 (1), 1994 (3), 1993 (2), 1992 (2), 1991 (5), 1990 (2), 1989 (7), 1987 (2), 1986 (1), 1985 (3), 1984 (3), 1983 (1), 1981 (2), 1979 (3), 1978 (1), 1977 (3), 1975 (3), 1973 (2).
21.
Performance-Based Design (PBD) methodologies are the contemporary trend in designing better and more economical earthquake-resistant structures, where the main objective is to achieve more predictable and reliable levels of safety and operability against natural hazards. Reliability-based optimization (RBO) methods, on the other hand, directly account for the variability of the design parameters in the formulation of the optimization problem. The objective of this work is to incorporate PBD methodologies under seismic loading into the framework of RBO, in conjunction with innovative tools for treating computationally intensive problems of real-world structural systems. Two types of random variables are considered: those that influence the level of seismic demand and those that affect the structural capacity. Reliability analysis is required for the assessment of the probabilistic constraints within the RBO formulation. The Monte Carlo Simulation (MCS) method is considered the most reliable method for estimating probabilities of exceedance and other statistical quantities, albeit in many cases at excessive computational cost. First- and Second-Order Reliability Methods (FORM, SORM) constitute alternative approaches, but they require an explicit limit-state function, which is not available for complex problems. In this study, in order to find the most efficient methodology for performing reliability analysis in conjunction with performance-based optimum design under seismic loading, a neural network approximation of the limit-state function is proposed and combined with either MCS or FORM for handling the uncertainties. The two methodologies are applied to RBO problems with sizing and topology design variables, resulting in a two-orders-of-magnitude reduction of the computational effort.
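To make the combination concrete, the following is a minimal sketch (not taken from the paper) of surrogate-assisted reliability analysis: a small neural network is trained to approximate a limit-state function, and Monte Carlo simulation is then run on the cheap surrogate. The limit-state function g(), the network size, and the sample counts are illustrative assumptions rather than the paper's structural model.

```python
# Sketch: surrogate-assisted Monte Carlo reliability analysis (illustrative only).
# The limit-state function g(), sample sizes, and network architecture are
# assumptions for demonstration, not the paper's actual structural model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit state: failure when g(x) <= 0.
    # Stands in for an expensive structural (e.g., finite-element) analysis.
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

# 1) Train the surrogate on a small design of experiments.
X_train = rng.normal(size=(200, 2))          # demand and capacity random variables
y_train = g(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# 2) Cheap Monte Carlo simulation on the surrogate instead of the true model.
X_mcs = rng.normal(size=(200_000, 2))
pf = np.mean(surrogate.predict(X_mcs) <= 0.0)  # probability-of-exceedance estimate
print(f"Estimated failure probability: {pf:.4e}")
```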
22.
We consider how doping can be described in terms of the charge-transfer insulator concept. We discuss and compare a few models for the band structure for the doped charges. This has led us to the conclusion that the band-structure stability problem is one of the main issues in any correspondence between results for the t-J model and, say, the three-band model for the slightly doped layered oxides. The stability criterion is formulated and its implications discussed. Provided a phenomenological conduction band is chosen to satisfy the criterion of stability, a detailed picture of how dopants influence the spin wave spectrum at T = 0 is presented. The basic physics for the destruction of the antiferromagnetic (AF) long-range order is rather model-independent: the long-range order (at T = 0) disappears due to the Cerenkov effect when the Fermi velocity first exceeds the spin wave velocity. We then discuss the overall spectrum of spin excitations and see that the spin wave attenuation for x < x_c, T = 0 due to Landau damping appears in the range of magnon momenta k(x) = 2m*s ± x. We also argue that in the presence of superconductivity the Cerenkov effect is eliminated due to the gap in the spectrum. This may restore the role of the AF fluctuations as the main source of dissipation at the lowest temperatures. A brief discussion of how interaction with magnons may affect the hole spectrum concludes the paper.
23.
In this paper, we strive towards the development of efficient techniques to segment document pages resulting from the digitization of historical machine-printed sources. Such documents often suffer from low quality and local skew, exhibit several degradations due to the quality of the old printing matrix or ink diffusion, and have a complex and dense layout. To face these problems, we introduce the following innovative aspects: (i) use of a novel Adaptive Run Length Smoothing Algorithm (ARLSA) to face the problem of complex and dense document layout, (ii) detection of noisy areas and punctuation marks that are usual in historical machine-printed documents, (iii) detection of possible obstacles formed by background areas in order to separate neighboring text columns or text lines, and (iv) use of skeleton segmentation paths in order to isolate possibly connected characters. Comparative experiments on several historical machine-printed documents prove the efficiency of the proposed technique.
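The adaptive rule that makes ARLSA novel is specific to the paper; the sketch below shows only the classic horizontal run-length smoothing step that it builds on, with the smoothing threshold left as a parameter that an adaptive scheme would set locally. Function names and the toy image are illustrative.

```python
import numpy as np

def horizontal_rlsa(binary, threshold):
    """Classic horizontal Run Length Smoothing: fill background runs
    shorter than `threshold` between two foreground pixels.
    `binary` is a 2D array with 1 = ink (foreground), 0 = background.
    An adaptive variant (as in ARLSA) would vary `threshold` locally,
    e.g. with the sizes of the neighbouring connected components."""
    out = binary.copy()
    for row in out:                            # each row is a view into `out`
        fg = np.flatnonzero(row)               # indices of foreground pixels
        for a, b in zip(fg[:-1], fg[1:]):
            if 1 < b - a <= threshold:         # short background gap -> smooth it
                row[a + 1:b] = 1
    return out

# Toy usage: two ink runs separated by a small gap get merged into one block.
img = np.zeros((3, 20), dtype=int)
img[1, 2:6] = 1
img[1, 9:13] = 1
print(horizontal_rlsa(img, threshold=5)[1])
```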
24.
Data recorded from multiple sources sometimes exhibit non-instantaneous couplings. For simple data sets, cross-correlograms may reveal the coupling dynamics, but when dealing with high-dimensional multivariate data there is no such measure as the cross-correlogram. We propose a simple algorithm based on kernel Canonical Correlation Analysis (kCCA), termed temporal kernel CCA (tkCCA), that computes a multivariate temporal filter which links one data modality to another. The filters can be used to compute a multivariate extension of the cross-correlogram, the canonical correlogram, between data sources that have different dimensionalities and temporal resolutions. The canonical correlogram reflects the coupling dynamics between the two sources, and the temporal filter reveals which features in the data give rise to these couplings and when they do so. We present results from simulations and neuroscientific experiments showing that tkCCA yields easily interpretable temporal filters and correlograms. In the experiments, we simultaneously performed electrode recordings and functional magnetic resonance imaging (fMRI) in primary visual cortex of the non-human primate. While electrode recordings reflect brain activity directly, fMRI provides only an indirect view of neural activity via the Blood Oxygen Level Dependent (BOLD) response. It is thus crucial for our understanding and interpretation of fMRI signals in general to relate them to direct measures of neural activity acquired with electrodes. The results computed by tkCCA confirm recent models of the hemodynamic response to neural activity and allow for a more detailed analysis of neurovascular coupling dynamics.
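As a rough illustration of the idea (not the paper's kernelized algorithm), the sketch below applies plain linear CCA to a time-lagged embedding of one modality, which yields a temporal filter and a crude per-lag coupling profile. The data shapes, lag range, and variable names are assumptions for demonstration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical data: X (e.g., band-limited electrode power, T x dx) drives
# Y (e.g., a BOLD-like signal, T x dy) with a delay of 3 samples.
T, dx, dy, max_lag = 600, 5, 4, 8
X = rng.normal(size=(T, dx))
Y = np.roll(X[:, :1], 3, axis=0) @ rng.normal(size=(1, dy)) + 0.5 * rng.normal(size=(T, dy))

# Time-lagged embedding of X: stack copies shifted by 0..max_lag samples.
lagged = np.hstack([np.roll(X, lag, axis=0) for lag in range(max_lag + 1)])
lagged, Y = lagged[max_lag:], Y[max_lag:]      # drop rows affected by wrap-around

# Linear CCA between the lagged embedding and Y (tkCCA would use kernels here).
cca = CCA(n_components=1).fit(lagged, Y)
wx = cca.x_weights_.reshape(max_lag + 1, dx)   # temporal filter: one weight row per lag

# A crude per-lag coupling profile: how strongly each lag contributes.
print("filter energy per lag:", np.round(np.linalg.norm(wx, axis=1), 3))
```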
25.
This article focuses on the optimization of PCDM, a parallel, two-dimensional (2D) Delaunay mesh generation application, and its interaction with parallel architectures based on simultaneous multithreading (SMT) processors. We first present the step-by-step effect of a series of optimizations on performance. These optimizations improve the performance of PCDM by up to a factor of six. They target issues that very often limit the performance of scientific computing codes. We then evaluate the interaction of PCDM with a real SMT-based SMP system, using both high-level metrics, such as execution time, and low-level information from hardware performance counters.
26.
This paper describes the architecture and implementation of a distributed autonomous gardening system with applications in urban/indoor precision agriculture. The garden is a mesh network of robots and plants. The gardening robots are mobile manipulators with an eye-in-hand camera; they are capable of locating plants in the garden, watering them, and locating and grasping fruit. The plants are potted cherry tomatoes enhanced with sensors and computation to monitor their well-being (e.g. soil humidity, state of the fruit) and with networking to communicate servicing requests to the robots. By embedding sensing, computation, and communication into the pots, task allocation in the system is coordinated in a decentralized manner, which makes the system scalable and robust against the failure of a centralized agent. We describe the architecture of this system and present experimental results for navigation, object recognition, and manipulation, as well as challenges that lie ahead toward autonomous precision agriculture with multi-robot teams.
27.
Dimitris, Nikos, Costas. Computers & Security, 2009, 28(7): 578-591.
Any application or service utilizing the Internet is exposed both to general Internet attacks and to specific ones. Most of the time the latter exploit a vulnerability or misconfiguration in the provided service and/or in the utilized protocol itself. Consequently, the deployment of critical services, like Voice over IP (VoIP) services, over the Internet is vulnerable to such attacks and, on top of that, offers a field for new attacks or variations of existing ones. Among the various threats and attacks that a service provider should consider are flooding attacks at the signaling level, which are very similar to those against TCP servers but have emerged at the application level of the Internet architecture. This paper examines flooding attacks against VoIP architectures that employ the Session Initiation Protocol (SIP) as their signaling protocol. The focus is on the design and implementation of an appropriate detection method. Specifically, a Bloom-filter-based monitor is presented and a new metric, named session distance, is introduced in order to provide an effective protection scheme against flooding attacks. The proposed scheme is evaluated on an experimental test bed architecture under different scenarios. The results of the evaluation demonstrate that the time required to detect such an attack is negligible and that the number of false alarms is close to zero.
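A minimal sketch of the monitoring idea follows: a small counting Bloom filter is populated with SIP dialog identifiers observed in a window and compared against a baseline snapshot. The hash scheme, thresholds, and the simplified distance between snapshots are illustrative assumptions, not the paper's exact session-distance metric.

```python
import hashlib

class CountingBloom:
    """Tiny counting Bloom filter keyed on SIP dialog identifiers
    (e.g. Call-ID plus tags). Sizes and hash count are illustrative."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.counters[pos] += 1

    def distance(self, other):
        # Simplified stand-in for the paper's "session distance": how far the
        # observed filter deviates from a reference (normal-traffic) snapshot.
        return sum(abs(a - b) for a, b in zip(self.counters, other.counters))

# Usage sketch: compare the current monitoring window against a baseline.
baseline, window = CountingBloom(), CountingBloom()
for call_id in (f"call-{i}@example.com" for i in range(50)):      # normal load
    baseline.add(call_id)
for call_id in (f"call-{i}@example.com" for i in range(5000)):    # INVITE flood
    window.add(call_id)
ALERT_THRESHOLD = 1000   # assumed value; would be tuned on the test bed
print("possible flooding attack" if window.distance(baseline) > ALERT_THRESHOLD else "ok")
```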
28.
In the cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify the open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, the techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaptation policy, objective function, allocation method, allocation operation, and interoperability.
29.
An analysis of selected spatiotemporal characteristics of isolated thunderstorms in relation to cloud-to-ground (CG) lightning over part of the eastern Mediterranean is performed. The purpose of the study is twofold: to better understand and improve basic knowledge of the physical mechanisms of the phenomenon, and to offer new means of nowcasting the lightning activity in such thunderstorms. Meteosat Second Generation (MSG) Rapid Scan Service (RSS) infrared imagery, which offers the option of tracking the examined storms at a time frequency of 5 minutes, is one of the two utilized datasets, the other being CG discharge data from the ZEUS very-low-frequency (VLF) lightning detection system. It was shown that a cloud-top temperature of about −20°C is required for the onset of lightning activity. A rapid drop of the cloud-top temperature, of about 11°C in 5 minutes on average, is observed a few minutes before or during lightning initiation. The maximum of the activity usually occurs quite close to the overall minimum cloud-top temperature of the cell. A temperature increase of 3.5°C from this overall minimum can mark the end of the activity, which is also associated with the time evolution of the cell's horizontal extent: after the cell's horizontal area stops increasing and/or starts to gradually diminish, CG lightning activity is expected to stop.
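The thresholds quoted above (roughly −20°C for onset, an 11°C drop within 5 minutes near initiation, and a 3.5°C rise from the overall minimum near cessation) can be turned into a simple rule-based check on a cloud-top temperature series; the sketch below does exactly that. The function, the flag logic, and the toy series are illustrative, not the study's actual procedure.

```python
def nowcast_lightning(ctt_series):
    """Apply the abstract's indicative thresholds to a list of cloud-top
    temperatures (degrees C) sampled every 5 minutes (MSG RSS cadence).
    Returns coarse per-step flags; the thresholds come from the abstract,
    the rest is an illustrative sketch."""
    flags = []
    running_min = float("inf")
    for i, t in enumerate(ctt_series):
        running_min = min(running_min, t)
        onset_possible = t <= -20.0                              # about -20 C needed for first CG strokes
        rapid_cooling = i > 0 and ctt_series[i - 1] - t >= 11.0  # about 11 C drop within 5 min
        likely_ending = t >= running_min + 3.5                   # about 3.5 C rise from the minimum so far
        flags.append((onset_possible, rapid_cooling, likely_ending))
    return flags

# Toy cloud-top temperature evolution of a single cell (C, every 5 minutes).
series = [-8, -10, -22, -35, -47, -55, -58, -57, -53, -48]
for step, flag in enumerate(nowcast_lightning(series)):
    print(step, flag)
```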
30.
The problem of channel sharing by rate-adaptive streams belonging to various classes is considered. Rate adaptation provides the opportunity to accept more connections by adapting the bandwidth of connections that are already in the system. However, bandwidth adaptation must be employed carefully in order to ensure that (a) bandwidth is allocated to the various classes in a fair manner (system perspective) and (b) bandwidth adaptation does not adversely affect the perceived user quality of the connection (user perspective). The system perspective has been studied earlier; this paper focuses on the equally important user perspective. It is proposed to quantify user Quality of Service (QoS) through measures capturing short- and long-term bandwidth fluctuations, which can be implemented with the mechanisms of traffic regulators, widely used in networking to control the traffic entering or exiting a network node. Furthermore, it is indicated how to integrate the user-perspective metrics with the optimal algorithms for system performance metrics developed earlier by the authors. Simulation results illustrate the effectiveness of the proposed framework.
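As one possible reading of the regulator-based metrics (a sketch, not the paper's formulation), a token-bucket-style regulator can bound how much bandwidth reduction a connection may accumulate over short and long horizons; intervals that violate the profile then serve as a crude fluctuation measure. The rates, bucket depths, and the sample trace below are assumptions.

```python
class FluctuationRegulator:
    """Token-bucket-style regulator used here to bound accumulated bandwidth
    *reduction*: `rate` is the tolerated average reduction per interval,
    `depth` the tolerated burst of reduction. Illustrative reading of the
    abstract's idea, not its actual metric."""
    def __init__(self, rate, depth):
        self.rate, self.depth, self.tokens = rate, depth, depth

    def violated(self, reduction):
        # Refill the bucket for this interval, then try to absorb the reduction.
        self.tokens = min(self.depth, self.tokens + self.rate)
        if reduction <= self.tokens:
            self.tokens -= reduction
            return False
        return True

# Hypothetical trace: nominal rate 512 kb/s, adapted allocation per second.
nominal = 512
allocated = [512, 512, 384, 256, 256, 512, 128, 128, 384, 512]
reductions = [nominal - a for a in allocated]

short_term = FluctuationRegulator(rate=64, depth=200)    # tight: tolerates only brief dips
long_term = FluctuationRegulator(rate=32, depth=1500)    # loose: bounds the long-run average
for name, reg in (("short-term", short_term), ("long-term", long_term)):
    count = sum(reg.violated(r) for r in reductions)
    print(name, "violations:", count)
```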
Nikos G. Argiriou received the Diploma degree in Electrical Engineering from the Department of Electrical Engineering, Telecommunication Division, Aristotle University of Thessaloniki, Greece, in 1996. He worked as a researcher on secure medical image transmission over networks at the Image Processing Lab of the same university during 1996–1997. During 1998–2000 he was a researcher for the European Esprit project Catserver, concerning the use of advanced Quality of Service techniques in CATV networks. He received his Ph.D. degree from Aristotle University of Thessaloniki in 2007. His current research interests are in the development and implementation of QoS techniques for wired and wireless networks.

Leonidas Georgiadis received the Diploma degree in Electrical Engineering from Aristotle University, Thessaloniki, Greece, in 1979, and his M.S. and Ph.D. degrees, both in Electrical Engineering, from the University of Connecticut, in 1981 and 1986, respectively. From 1986 to 1987 he was Research Assistant Professor at the University of Virginia, Charlottesville. In 1987 he joined the IBM T. J. Watson Research Center, Yorktown Heights, as a Research Staff Member. Since October 1995 he has been with the Telecommunications Department of Aristotle University, Thessaloniki, Greece. His interests are in the area of wireless networks, high-speed networks, routing, scheduling, congestion control, modeling, and performance analysis.