Full text (fee-based)   690 articles
Free   42 articles
Free (domestic)   9 articles
Electrical engineering   15 articles
General   4 articles
Chemical industry   143 articles
Metalworking   16 articles
Machinery and instrumentation   22 articles
Building science   36 articles
Mining engineering   2 articles
Energy and power   49 articles
Light industry   34 articles
Water resources engineering   5 articles
Petroleum and natural gas   23 articles
Radio and wireless   110 articles
General industrial technology   121 articles
Metallurgy   27 articles
Nuclear technology   4 articles
Automation technology   130 articles
2024   3 articles
2023   30 articles
2022   32 articles
2021   57 articles
2020   42 articles
2019   34 articles
2018   52 articles
2017   38 articles
2016   45 articles
2015   32 articles
2014   34 articles
2013   61 articles
2012   37 articles
2011   51 articles
2010   35 articles
2009   27 articles
2008   22 articles
2007   17 articles
2006   20 articles
2005   8 articles
2004   12 articles
2003   12 articles
2002   8 articles
2001   8 articles
2000   2 articles
1999   5 articles
1998   1 article
1996   2 articles
1995   3 articles
1994   2 articles
1993   3 articles
1992   2 articles
1987   1 article
1978   1 article
1976   1 article
1975   1 article
A total of 741 results were found (search time: 15 ms).
1.
In this paper, a generalized multiple-input multiple-output (MIMO) antenna system that can be fitted to the uplink of a wireless communication system is considered for the general multi-user case. At the transmitter, the information bits are Turbo coded, then interleaved and passed through a serial-to-parallel converter. The channel is assumed to be a bad urban channel suffering from multipath Rayleigh fading, resulting in inter-symbol and multiple-access interference (ISI and MAI). At the front end of the receiver, a number of receiving antennas are used, followed by a joint multi-user estimator based on the Minimum Mean Square Error Block Linear Equalizer (MMSE-BLE). Computer simulations demonstrate a significant performance improvement in both the single-user and multi-user cases. This paper is based in part on work presented at the 11th European Wireless Conference, Nicosia, Cyprus, pp. 187–192, April 2005. Yasmine A. Fahmy was born in Guiza, Egypt, on June 4, 1976. She received the B.Sc., M.Sc. and Ph.D. degrees in Communication and Electronics Engineering from Cairo University, Egypt, in 1999, 2001 and 2005, respectively. She is presently an assistant professor at Cairo University, Egypt. Her current fields of interest are wireless communication and channel estimation. Hebat-Allah M. Mourad received her B.Sc., M.Sc. and Ph.D. degrees in electrical communication engineering from Cairo University, Egypt, in 1983, 1987 and 1994, respectively. Since 1983, she has been with the Department of Electronics and Communications, Faculty of Engineering, Cairo University, where she is currently an associate professor. Her research interests include optical fiber communications, mobile and satellite communications. Emad K. Al-Hussaini received his B.Sc. degree in Electrical Communication Engineering from Ain-Shams University, Cairo, Egypt, in 1964 and his M.Sc. and Ph.D. degrees from Cairo University, Giza, Egypt, in 1974 and 1977, respectively. From 1964 to 1970, he was with the General Egyptian Aero Organization. Since 1970, he has been with the Department of Electronics and Communications, Faculty of Engineering, Cairo University, where he is currently a professor. He was a research fellow at Imperial College, London, UK, and at the Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA, USA, in the academic years 1976/1977 and 1981/1982, respectively. In 1990, he received the Egyptian national encouragement award for outstanding engineering research. He has written several papers for international technical journals and conferences. His research interests include signal processing, fading channel communication, modulation, and cellular mobile radio systems. Dr. Al-Hussaini is a senior member of the IEEE. He is listed in Marquis Who's Who in the World and in the IBC (International Biographical Center, Cambridge) for outstanding people of the 20th century.
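The joint multi-user detection step mentioned in this abstract can be illustrated with a short linear-algebra sketch. The snippet below is a minimal NumPy illustration (not the authors' code) of an MMSE block linear equalizer: given a stacked received block y = A d + n, the estimate is d_hat = (A^H A + sigma^2 I)^(-1) A^H y, where A is a combined channel matrix across users and receive antennas. All dimensions, symbol counts and noise levels are assumptions for illustration only.

```python
import numpy as np

def mmse_ble(A, y, noise_var):
    """Minimal MMSE block linear equalizer sketch.

    A         : combined system matrix (rx samples x tx symbols),
                stacking all users and receive antennas.
    y         : received block (rx samples,).
    noise_var : noise variance sigma^2 (symbols assumed unit power).
    Returns the soft symbol estimates d_hat = (A^H A + sigma^2 I)^-1 A^H y.
    """
    AhA = A.conj().T @ A
    reg = AhA + noise_var * np.eye(AhA.shape[0])
    return np.linalg.solve(reg, A.conj().T @ y)

# Toy example: 2 users x 4 symbols each, several receive antennas, random channel.
rng = np.random.default_rng(0)
A = (rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))) / np.sqrt(2)
d = rng.choice([-1.0, 1.0], size=8)                # BPSK symbols
y = A @ d + 0.1 * rng.standard_normal(32)          # noisy received block
d_hat = mmse_ble(A, y, noise_var=0.01)
print(np.sign(d_hat.real) == d)                    # hard decisions vs. truth
```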
2.
Conical magnetic bearings with radial and thrust control   (cited by 1; self-citations: 0; citations by others: 1)
Conical magnetic bearings with radial and thrust (axial) control are discussed. First, a model of the conical bearing is derived in state-variable form, with air-gap flux, air-gap displacement, and velocity as the state variables. A control method based on Q-parameterization theory, a recent development in control theory, is proposed and used to design a stabilizing controller for the system, which is inherently unstable without control. The controller parameters are chosen using CONSOLE (a software package for interactive optimization-based design) so that all design requirements are satisfied. Digital simulation was used to verify the proposed control method. Transient responses, forced responses in all three directions (vertical, horizontal, and axial), and the interactions between them are obtained. The results are satisfactory and suggest the robustness of the proposed technique.
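As a rough illustration of the state-variable formulation mentioned above (not the authors' actual bearing model), the sketch below builds a generic linear state-space model x_dot = A x + B u with state x = [air-gap flux, air-gap displacement, velocity] and checks open-loop instability from the eigenvalues of A. All numerical values are placeholders, not parameters from the paper.

```python
import numpy as np

# Placeholder linear model x_dot = A x + B u for one control axis.
# State x = [airgap flux, airgap displacement, velocity]; values are illustrative only.
A = np.array([[-50.0,    0.0, 200.0],   # flux dynamics driven by coil and motion
              [  0.0,    0.0,   1.0],   # displacement integrates velocity
              [300.0, 4000.0,   0.0]])  # force from flux plus negative-stiffness term
B = np.array([[500.0],                  # coil voltage drives flux
              [  0.0],
              [  0.0]])

eigvals = np.linalg.eigvals(A)
print("open-loop poles:", eigvals)
print("unstable without control:", bool(np.any(eigvals.real > 0)))
```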
3.
In this paper, a five-level cascaded H-bridge multilevel inverter topology is applied to an induction motor control strategy known as direct torque control (DTC). A five-level inverter can generate more inverter states, which improves voltage-vector selection. This paper also introduces two different control methods to select the appropriate output voltage vector for driving the torque and flux errors to zero. The first is based on the conventional DTC scheme, using a pair of hysteresis comparators and a lookup table to select the output voltage vector that controls the torque and flux. The second is based on a new fuzzy logic controller using Sugeno inference to select the output voltage vector, replacing the hysteresis comparators and lookup table of the conventional DTC; the results show a further reduction in torque ripple and smoother stator currents. Using Matlab/Simulink, it is verified that a five-level inverter in a DTC drive reduces the torque ripple compared with conventional DTC, and that applying the fuzzy logic controller reduces it further. The simulation results also verify that using a fuzzy controller instead of a hysteresis controller significantly reduces the flux ripple and brings the total harmonic distortion of the stator current below 4 %.
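To make the conventional DTC selection step concrete, here is a minimal sketch (a simplified, assumed two-level-style table rather than the paper's five-level one) of hysteresis comparators for torque and flux feeding a lookup table indexed by the stator-flux sector. The table entries and hysteresis band widths are illustrative placeholders.

```python
import numpy as np

def hysteresis(error, band, prev_state):
    """Two-level hysteresis comparator: +1 = increase, -1 = decrease."""
    if error > band:
        return +1
    if error < -band:
        return -1
    return prev_state  # hold previous command inside the band

# Illustrative voltage-vector lookup table: keys = (flux cmd, torque cmd),
# columns = stator-flux sector 0..5, entries = inverter voltage-vector index.
VOLTAGE_TABLE = {
    (+1, +1): [2, 3, 4, 5, 6, 1],
    (+1, -1): [6, 1, 2, 3, 4, 5],
    (-1, +1): [3, 4, 5, 6, 1, 2],
    (-1, -1): [5, 6, 1, 2, 3, 4],
}

def select_vector(flux_err, torque_err, flux_angle, prev=(+1, +1)):
    """Pick a voltage vector from the hysteresis outputs and the flux sector."""
    d_flux = hysteresis(flux_err, band=0.02, prev_state=prev[0])
    d_torque = hysteresis(torque_err, band=0.5, prev_state=prev[1])
    sector = int(np.floor((flux_angle % (2 * np.pi)) / (np.pi / 3)))  # 0..5
    return VOLTAGE_TABLE[(d_flux, d_torque)][sector], (d_flux, d_torque)

vec, state = select_vector(flux_err=0.05, torque_err=-1.2, flux_angle=1.0)
print("selected voltage vector:", vec)
```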
4.
5.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting components of software that are defect-prone. One aspect of this research focuses on predicting software changes that are fix-inducing. Although the prior research on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: it assigns the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread synchronization issue. Therefore, in this paper, we study high impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files and the number of subsystems modified by developers during an associated fix of the fix-inducing change. We perform our study using six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56 % to 77 % of HIFCs with an average false alarm (misclassification) rate of 16 %. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4 % over the state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
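A hedged sketch of the kind of impact measure described above (not the authors' exact formula): combine the churn, the number of files, and the number of subsystems touched by the fixes associated with a fix-inducing change, then flag the change as a HIFC when its score exceeds a threshold. The weights and threshold below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class FixInfo:
    churn: int         # lines added + deleted across the associated fixing changes
    n_files: int       # number of files modified during the fixes
    n_subsystems: int  # number of subsystems (e.g., top-level directories) touched

def impact_score(fix: FixInfo, w_churn=1.0, w_files=5.0, w_subsys=20.0):
    """Illustrative weighted impact score for a fix-inducing change."""
    return w_churn * fix.churn + w_files * fix.n_files + w_subsys * fix.n_subsystems

def is_hifc(fix: FixInfo, threshold=200.0):
    """Flag high impact fix-inducing changes (threshold is a placeholder)."""
    return impact_score(fix) >= threshold

print(is_hifc(FixInfo(churn=12, n_files=1, n_subsystems=1)))   # small typo-style fix -> False
print(is_hifc(FixInfo(churn=350, n_files=9, n_subsystems=3)))  # broad, costly fix -> True
```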
6.
Bug fixing accounts for a large amount of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees from the aforementioned factors to predict re-opened bugs, and perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1–78.6 % and a recall of 70.5–94.1 % when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost due to re-opened bugs.
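As an illustration of the modelling step (a generic sketch, not the authors' pipeline or data), the snippet below trains a small scikit-learn decision tree on made-up bug-report features such as time-to-fix, fixer experience, and last status, and reports which features the tree ranks as most important. The features, label rule, and parameters are all synthetic assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 500

# Synthetic features standing in for the work-habit, report, fix and team dimensions.
X = np.column_stack([
    rng.integers(0, 7, n),         # weekday the bug was initially closed
    rng.exponential(10.0, n),      # days taken for the initial fix
    rng.integers(1, 50, n),        # bug fixer's experience (prior fixes)
    rng.integers(0, 3, n),         # encoded "last status" when first closed
])
# Synthetic label: quickly closed bugs with one particular last status re-open (made up).
y = ((X[:, 1] < 3) & (X[:, 3] == 2)).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
for name, imp in zip(["weekday", "time_to_fix", "fixer_experience", "last_status"],
                     clf.feature_importances_):
    print(f"{name:18s} importance = {imp:.2f}")
```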
7.

The Peer-to-Peer Cloud (P2P-Cloud) is a suitable alternative for distributing content on a large scale over cloud-based or peer-to-peer (P2P) infrastructures, and is used in many applications such as IPTV and video-on-demand. In a P2P-Cloud network, overload is a common problem during periods of peak demand: if a node receives many requests simultaneously, it may not be able to respond quickly, and this access latency is a major problem for users. Replication in P2P-Cloud environments reduces access time and network bandwidth usage by keeping multiple copies of the data in diverse locations; it improves access to the information and increases the reliability of the system. The main problem in data replication is identifying the best possible placement of replica nodes with respect to user requests and data access time, which is an NP-hard optimization problem. This paper proposes a new replica placement method that improves average access time and replica cost using fuzzy logic and the Ant Colony Optimization (ACO) algorithm. The ants find the shortest path to discover the optimal node on which to place the duplicate file with the least access latency. The fuzzy module evaluates the historical information of each node to analyze the pheromone value in each iteration, and the fuzzy membership function determines each node's degree based on four node characteristics. The simulation results show that access time and replica cost are improved compared with other replica placement algorithms.
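A minimal sketch of the ant-colony selection idea described above (simplified and assumed, not the paper's algorithm): each candidate node is chosen with probability proportional to pheromone^alpha times a latency heuristic^beta, and a fuzzy-style quality score stands in for the paper's fuzzy module when depositing pheromone. Node names, latencies, and constants are placeholders.

```python
import random

# Candidate nodes with illustrative access latency (ms) and initial pheromone levels.
nodes = {"n1": 40.0, "n2": 15.0, "n3": 90.0, "n4": 25.0}
pheromone = {name: 1.0 for name in nodes}
ALPHA, BETA, RHO = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation rate

def fuzzy_quality(latency):
    """Stand-in for the fuzzy module: maps latency to a [0, 1] quality score."""
    return max(0.0, 1.0 - latency / 100.0)

def choose_node():
    """Pick a replica host with probability ~ pheromone^alpha * (1/latency)^beta."""
    weights = {n: pheromone[n] ** ALPHA * (1.0 / lat) ** BETA for n, lat in nodes.items()}
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for n, w in weights.items():
        acc += w
        if r <= acc:
            return n
    return n

for _ in range(50):                                      # 50 ant iterations
    chosen = choose_node()
    for n in pheromone:                                  # evaporation on all nodes
        pheromone[n] *= (1.0 - RHO)
    pheromone[chosen] += fuzzy_quality(nodes[chosen])    # fuzzy-scaled deposit

print("best replica host:", max(pheromone, key=pheromone.get))
```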
8.
The quality of the interfacial bonding between asphalt binder and aggregates plays a significant role in determining the durability of asphalt mixtures. Warm mix asphalt (WMA) modifiers have been used extensively in the last decade, primarily to reduce production and compaction temperatures as well as to improve the workability of asphalt mixtures. This study aimed to provide a better understanding of the effects of these WMA modifiers on the interfacial bonding between asphalt binders and aggregates. The evaluation focused on measuring the surface energy of binders (in unaged and aged states) and of aggregates, and then calculating energy parameters that describe the potential of a given asphalt-aggregate combination to resist fatigue cracking and moisture damage. Results show that the asphalt-WMA additive combination, as well as the WMA additive content applied, has a significant impact on fatigue cracking and moisture damage resistance. The results suggest that it is poor practice to use a given type and percentage of WMA modifier without regard for binder type. Instead, test methods are recommended to evaluate the compatibility of the asphalt binder, WMA additive type/content, and aggregates for improved performance under different conditions.
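One common way to turn measured surface energy components into a bond-strength parameter (a sketch using the standard acid-base surface energy approach, not necessarily the exact parameters computed in this study) is the work of adhesion between binder and aggregate, W12 = 2[sqrt(G1_LW * G2_LW) + sqrt(G1_plus * G2_minus) + sqrt(G1_minus * G2_plus)]. The numeric inputs below are placeholders, not measured values from the paper.

```python
from math import sqrt

def work_of_adhesion(lw1, plus1, minus1, lw2, plus2, minus2):
    """Work of adhesion (mJ/m^2) between two materials from their
    Lifshitz-van der Waals (lw), acid (+) and base (-) surface energy components."""
    return 2.0 * (sqrt(lw1 * lw2) + sqrt(plus1 * minus2) + sqrt(minus1 * plus2))

# Placeholder surface energy components (mJ/m^2): binder (1) and aggregate (2).
binder = dict(lw1=15.0, plus1=2.0, minus1=1.5)
aggregate = dict(lw2=50.0, plus2=10.0, minus2=200.0)

Wba = work_of_adhesion(**binder, **aggregate)
print(f"dry binder-aggregate work of adhesion: {Wba:.1f} mJ/m^2")
```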
9.
The use of colloidal quantum dots (CQDs) as a gain medium in infrared laser devices has been hindered by the need for high pumping intensities, very short gain lifetimes, and low gain coefficients. Here, PbS/PbSSe core/alloyed-shell CQDs are employed as an infrared gain medium; they exhibit highly suppressed Auger recombination with a lifetime of 485 ps, lowering the amplified spontaneous emission (ASE) threshold down to 300 µJ cm⁻² and showing a record-high net modal gain coefficient of 2180 cm⁻¹. By doping these engineered core/shell CQDs up to nearly filling the first excited state, a significant reduction of the optical gain threshold is demonstrated, measured by transient absorption, to an average exciton population per dot of ⟨N⟩th,gain = 0.45 due to bleaching of the ground-state absorption. This in turn has led to a fivefold reduction in the ASE threshold, at ⟨N⟩th,ASE = 0.70 excitons per dot, associated with a gain lifetime of 280 ps. Finally, these heterostructured QDs are used to achieve near-infrared lasing at 1670 nm at a pump fluence corresponding to a sub-single-exciton-per-dot threshold (⟨N⟩th,las = 0.87). This work brings infrared CQD lasing thresholds on par with their visible counterparts and paves the way toward solution-processed infrared laser diodes.
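The per-dot exciton numbers quoted above can be related to pump fluence through Poisson statistics, a standard estimate (not specific to this paper): ⟨N⟩ = sigma * j, where sigma is the absorption cross-section and j the pump photon fluence, and the fraction of dots holding at least k excitons follows the Poisson distribution. The cross-section and photon energy below are placeholders, not values from the study.

```python
from math import exp, factorial

def mean_excitons_per_dot(cross_section_cm2, fluence_uJ_cm2, photon_energy_eV):
    """<N> = sigma * j, with j the pump fluence converted to photons per cm^2."""
    photon_energy_J = photon_energy_eV * 1.602e-19
    photons_per_cm2 = (fluence_uJ_cm2 * 1e-6) / photon_energy_J
    return cross_section_cm2 * photons_per_cm2

def fraction_with_at_least(n_mean, k):
    """Poisson probability that a dot holds at least k excitons."""
    return 1.0 - sum(exp(-n_mean) * n_mean**i / factorial(i) for i in range(k))

# Placeholder absorption cross-section for a PbS-based dot at the pump wavelength.
sigma = 1.0e-15          # cm^2 (assumed)
N = mean_excitons_per_dot(sigma, fluence_uJ_cm2=300.0, photon_energy_eV=3.1)
print(f"<N> at 300 uJ/cm^2: {N:.2f}")
print(f"fraction of dots with >=1 exciton: {fraction_with_at_least(N, 1):.2f}")
```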
10.
Peer-to-peer (P2P) networks are beginning to form the infrastructure of future applications. Computers are organized in P2P overlay networks to facilitate search queries at reasonable cost, so scalability is a major aim in the design of P2P networks. In this paper, to obtain a high degree of scalability, we partition the network search space using a consistent, static, shared upper ontology. We name our approach the semantic partition tree (SPT). All resources and queries are annotated using the upper ontology, and queries are semantically routed in the overlay network. Each node also indexes the addresses of other nodes that hold content expressible by the concept it maintains, so our approach can be conceived of as an ontology-based distributed hash table (DHT). We also introduce a lookup service for the network that is highly scalable: its cost is independent of the network size and depends only on the depth of the ontology tree. We further introduce a broadcast algorithm for the network. We present worst-case analyses of both the lookup and broadcast algorithms and measure their performance using simulation. The results show that our scheme is highly scalable and can be used in real P2P applications.
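A minimal sketch of the ontology-routed lookup idea (an assumed structure, not the paper's SPT implementation): resources and queries are annotated with a path of concepts in a shared upper ontology, and a lookup walks down that concept path, so its cost depends only on the ontology depth rather than the number of peers. The ontology and peer addresses below are invented placeholders.

```python
# Shared upper ontology as a concept tree; each concept maps to the peer responsible
# for indexing resources annotated with it (addresses are placeholders).
ONTOLOGY = {
    "thing":    {"peer": "10.0.0.1", "children": ["media", "software"]},
    "media":    {"peer": "10.0.0.2", "children": ["video", "audio"]},
    "video":    {"peer": "10.0.0.3", "children": []},
    "audio":    {"peer": "10.0.0.4", "children": []},
    "software": {"peer": "10.0.0.5", "children": ["p2p-apps"]},
    "p2p-apps": {"peer": "10.0.0.6", "children": []},
}

def lookup(concept_path):
    """Route a query along its concept path; cost = O(depth of the ontology tree)."""
    current = "thing"
    hops = [ONTOLOGY[current]["peer"]]
    for concept in concept_path:
        if concept not in ONTOLOGY[current]["children"]:
            raise KeyError(f"{concept!r} is not a child of {current!r} in the ontology")
        current = concept
        hops.append(ONTOLOGY[current]["peer"])
    return current, hops

concept, route = lookup(["media", "video"])
print(f"query for {concept!r} routed via {route}")   # hop count is independent of network size
```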