111.
This article extends a hybrid evolutionary algorithm to the feeder reconfiguration problem in distribution networks. The proposed method combines Self-Adaptive Modified Particle Swarm Optimization (SAMPSO) with the Modified Shuffled Frog Leaping Algorithm (MSFLA) to drive the search toward the global optimum. As with other population-based algorithms, PSO has parameters that must be tuned for good performance, so a self-adaptive framework is proposed to adjust them dynamically: in SAMPSO, the PSO learning factors are treated as additional control variables that change over the course of the evolutionary process. To further improve solution quality, SAMPSO is combined with MSFLA, and the resulting hybrid algorithm minimizes the electrical energy losses of the distribution system through feeder reconfiguration. The effectiveness of the proposed method is demonstrated on two test systems.
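The self-adaptive idea, carrying the learning factors inside each particle so they evolve with it, can be sketched as follows. This is a minimal illustrative stand-in, not the authors' SAMPSO: the clamping range, inertia weight, and test function are assumptions.

```python
import random

def sampso_sketch(loss, dim, n_particles=20, iters=200, seed=1):
    # Illustrative self-adaptive PSO: the two learning factors c1, c2
    # are appended to each particle's position vector and evolve with it.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] + [1.5, 1.5]
           for _ in range(n_particles)]
    vel = [[0.0] * (dim + 2) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: loss(p[:dim]))[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            # learning factors are read from the particle itself,
            # clamped to a stable range (an assumed choice)
            c1 = min(max(p[dim], 0.5), 2.0)
            c2 = min(max(p[dim + 1], 0.5), 2.0)
            for d in range(dim + 2):
                vel[i][d] = (0.7 * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - p[d])
                             + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if loss(p[:dim]) < loss(pbest[i][:dim]):
                pbest[i] = p[:]
                if loss(p[:dim]) < loss(gbest[:dim]):
                    gbest = p[:]
    return gbest[:dim], loss(gbest[:dim])

sphere = lambda x: sum(v * v for v in x)
best, val = sampso_sketch(sphere, dim=3)
```

In the actual paper the fitness would be the network's energy loss under a candidate feeder configuration rather than this toy sphere function.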
112.
High-performance clusters, built by connecting many computing nodes, are one of the main architectures for achieving extremely high performance. These systems are currently moving from multi-core to many-core architectures to increase their computational capability. This trend eventually makes the network interfaces a performance bottleneck: they are few in number and cannot handle multiple network requests at a time, which leads to longer waiting times in the network interface queue and lower performance. In this paper, we tackle this problem with a process mapping algorithm that improves inter-node communication in multi-core clusters. Our mapping strategy reduces contention for the network interface by distributing communication-intensive processes across the computing nodes, lowering the waiting time in the network interface queue. Performance results for synthetic and real workloads show that the proposed strategy improves performance by 8% to 90% over other methods in the tested cases.
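The core strategy, spreading the heaviest communicators so no single network interface becomes a hotspot, can be sketched with a greedy placement. This is an illustrative sketch of the idea, not the published algorithm; the inputs and tie-breaking are assumptions.

```python
def map_processes(comm_volume, n_nodes, cores_per_node):
    # Greedy sketch: visit processes in decreasing communication volume
    # and place each on the least-loaded node that still has a free core,
    # balancing the aggregate load on each node's network interface.
    # Assumes len(comm_volume) <= n_nodes * cores_per_node.
    order = sorted(range(len(comm_volume)),
                   key=lambda p: comm_volume[p], reverse=True)
    load = [0.0] * n_nodes            # aggregate NIC load per node
    slots = [cores_per_node] * n_nodes
    placement = {}
    for p in order:
        node = min((n for n in range(n_nodes) if slots[n] > 0),
                   key=lambda n: load[n])
        placement[p] = node
        load[node] += comm_volume[p]
        slots[node] -= 1
    return placement, load

placement, load = map_processes([10, 9, 1, 1], n_nodes=2, cores_per_node=2)
```

Here the two communication-intensive processes (volumes 10 and 9) land on different nodes, so neither interface queues both heavy streams.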
113.
Multicast routing is a crucial issue in wireless networks, where the same content must be delivered to a group of recipients simultaneously. Multicast is also a key service for audio and video applications, as well as for data dissemination protocols, over the last-mile backhaul Internet connectivity provided by multi-channel multi-radio wireless mesh networks (MCMR WMNs). The multicast problem is closely tied to the channel assignment strategy, which determines the most suitable channel-radio associations. Channel assignment, however, brings its own complications, so solving the multicast problem in MCMR WMNs is more complicated than in traditional networks; the problem has been proved NP-hard. In most prior multicast protocols for these networks, channel assignment and multicast routing are treated as two separate sub-problems and solved sequentially. This article promotes the use of learning automata for the joint channel assignment and multicast routing problem in MCMR WMNs. In the proposed scheme, named LAMR, the two sub-problems are solved conjointly, in contrast to existing methods. Experimental results show that LAMR outperforms the LCA and MCM algorithms of Zeng et al. (IEEE Trans. Parallel Distrib. Syst. 21(1):86–99, 2010) as well as the genetic algorithm-, tabu search-, and simulated annealing-based methods of Cheng and Yang (Int. J. Appl. Soft Comput. 11(2):1953–1964, 2011) in terms of achieved throughput, end-to-end delay, average packet delivery ratio, and total multicast tree cost.
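A learning automaton maintains a probability vector over its actions (e.g., candidate channels) and reinforces whichever action is rewarded by the environment. The sketch below shows the classic linear reward-inaction (L_RI) rule that such schemes build on; it is a generic textbook update, not LAMR itself, and the reward signal here (action 1 always succeeds) is a toy assumption.

```python
import random

def lri_select(probs, rng):
    # Sample an action index according to the probability vector.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def lri_update(probs, chosen, rewarded, a=0.1):
    # Linear reward-inaction: on reward, shift probability mass toward
    # the chosen action; on penalty, leave the vector unchanged.
    if rewarded:
        probs = [p * (1 - a) for p in probs]
        probs[chosen] += a
    return probs

# Toy environment with 3 channels, where channel 1 is always rewarded.
rng = random.Random(0)
probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(300):
    ch = lri_select(probs, rng)
    probs = lri_update(probs, ch, rewarded=(ch == 1))
```

After a few hundred interactions the automaton concentrates almost all probability on the consistently rewarded channel.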
114.
A semantic social network-based expert recommender system
This research presents a framework for building a hybrid expert recommendation system that integrates the characteristics of content-based recommendation algorithms into a social network-based collaborative filtering system. The proposed method aims to improve the accuracy of recommendation prediction by considering the social aspects of experts' behavior. For this purpose, content-based profiles of experts are first constructed by crawling online resources. A semantic kernel is built from background knowledge derived from Wikipedia and is used to enrich the experts' profiles. Experts' social communities are detected by applying social network analysis, using factors such as experience, background, knowledge level, and personal preferences; in this way, hidden social relationships among individuals can be discovered. The identified communities are used to determine a member's value according to the general behavior pattern of the community that individual belongs to. Representative members of each community are then identified using the eigenvector centrality measure. Finally, a recommendation relates an information item, for which a user is seeking an expert, to the representatives of the most relevant community. Such a semantic social network-based expert recommendation system benefits both sides: users are provided with a group of experts who can help with their information needs, and experts are assigned to relevant information items that fall under their expertise and interests.
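The representative-selection step relies on eigenvector centrality, which scores each member by how strongly it is connected to other well-connected members. A minimal power-iteration sketch (the generic method, not the authors' implementation; the star graph is an assumed example):

```python
def eigenvector_centrality(adj, iters=200):
    # Power iteration on (A + I): the identity shift keeps the same
    # eigenvectors as A but avoids oscillation on bipartite graphs.
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        nxt = [x[i] + sum(adj[i][j] * x[j] for j in range(n))
               for i in range(n)]
        m = max(nxt)
        x = [v / m for v in nxt]   # normalize so the max score is 1
    return x

# Star-shaped community: member 0 knows members 1, 2, 3.
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
scores = eigenvector_centrality(adj)
representative = scores.index(max(scores))
```

The hub of the star gets the top score and would be chosen as the community's representative; the leaves each score 1/√3 of the hub's value, matching the dominant eigenvector of the adjacency matrix.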
115.
Canal section design with minimum cost involves minimizing the total cost per unit length of the canal, including the direct costs of earthworks (per cubic metre) and canal lining (per metre) and the indirect costs of water losses through seepage and evaporation. Since both direct and indirect costs depend on the canal geometry and dimensions, they can be lowered by optimizing this objective function. Several constraints were imposed on the problem: flow discharge is the main constraint, with the minimum permissible velocity and the Froude number as subsidiary constraints. MATLAB was used to implement and run the optimization algorithm. The results are presented as dimensionless graphs that simplify the optimum design of canal dimensions with minimum cost per metre length. Comparison with similar studies highlights the importance of earthwork and lining costs, as well as of including the subsidiary constraints in the optimization process.
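The structure of the problem can be illustrated with a brute-force search over a trapezoidal section: cost per metre is earthwork on the flow area plus lining on the wetted perimeter, and Manning's equation enforces the discharge constraint. All numeric values here (roughness, slope, unit costs, search range) are hypothetical, and the grid search stands in for the paper's optimization algorithm.

```python
def canal_design(Q_req, n=0.015, S=0.0004, z=1.5,
                 c_earth=5.0, c_line=10.0):
    # Grid-search sketch over bottom width b and depth y (metres) of a
    # trapezoidal canal with side slope z.  Cost per metre =
    # c_earth * area + c_line * perimeter, subject to Manning's
    # equation Q = (1/n) * A * R^(2/3) * S^(1/2) >= Q_req.
    best = None
    steps = [i * 0.05 for i in range(10, 101)]   # 0.5 m .. 5.0 m
    for b in steps:
        for y in steps:
            A = (b + z * y) * y                  # flow area
            P = b + 2 * y * (1 + z * z) ** 0.5   # wetted perimeter
            Q = (1 / n) * A * (A / P) ** (2 / 3) * S ** 0.5
            if Q < Q_req:
                continue                         # discharge constraint
            cost = c_earth * A + c_line * P
            if best is None or cost < best[0]:
                best = (cost, b, y)
    return best

cost, b, y = canal_design(5.0)
```

The subsidiary constraints (minimum permissible velocity, Froude number) would be added as further `continue` filters inside the inner loop.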
116.
117.
The freely available global and near-global digital elevation models (DEMs) have shown great potential for various remote sensing applications. The Shuttle Radar Topography Mission (SRTM) data sets provide a near-global DEM of the Earth's surface obtained with interferometric synthetic aperture radar (InSAR). Although free access and global coverage are advantages of these data sets, many applications require more detailed and accurate DEMs. In this paper, we propose a modified polarimetry-clinometry algorithm for improving the SRTM topography model that requires only one set of polarimetric synthetic aperture radar (PolSAR) data. The method rests on estimating the azimuth and range slope components from polarization orientation angle (POA) shifts and on an intensity-based Lambertian model. It first compensates the PolSAR data for the topography effect corresponding to SRTM, using the DEM-derived POA. In the second step, the POA is obtained from the compensated PolSAR data with a modified algorithm; according to the polarimetric model, the POA shifts with variations in the azimuth and range slopes. In addition to the polarimetric model, a clinometry model based on Lambertian scattering relates the backscattered intensity to the terrain slope. The two unknowns, the azimuth and range slope values, are then estimated from the compensated PolSAR data by solving the system of equations formed by the two models, and the SRTM azimuth and range slopes are enhanced by the PolSAR-derived slopes. Finally, a weighted least-square grid adjustment (WLSG) method is proposed to integrate the enhanced slope maps and estimate enhanced heights. NASA Jet Propulsion Laboratory (JPL) AIRSAR data were used to illustrate the potential of the proposed method for SRTM enhancement, and an InSAR DEM was employed for the evaluation experiments. The results show that the accuracy of the SRTM DEM is improved by up to 2.91 m relative to the InSAR DEM.
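The final WLSG step fuses the prior heights with the slope observations. A 1-D sketch of the idea (the paper works on a 2-D grid, and the weights here are assumptions): each cell contributes a prior-height observation and each pair of neighbours a slope observation, and the normal equations of the weighted least-squares problem are solved for the adjusted heights.

```python
def wls_heights(prior, slopes, w_prior=1.0, w_slope=4.0):
    # 1-D weighted least-square grid adjustment: minimize
    #   sum w_prior * (h_i - prior_i)^2
    # + sum w_slope * (h_{i+1} - h_i - slopes_i)^2
    # by building the normal equations (M h = rhs) and solving them.
    n = len(prior)
    M = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for i in range(n):                    # prior-height observations
        M[i][i] += w_prior
        rhs[i] += w_prior * prior[i]
    for i, s in enumerate(slopes):        # neighbour-slope observations
        M[i][i] += w_slope;     M[i + 1][i + 1] += w_slope
        M[i][i + 1] -= w_slope; M[i + 1][i] -= w_slope
        rhs[i] -= w_slope * s;  rhs[i + 1] += w_slope * s
    # Gauss-Jordan elimination (M is symmetric positive definite here)
    for c in range(n):
        piv = M[c][c]
        for j in range(c, n):
            M[c][j] /= piv
        rhs[c] /= piv
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                for j in range(c, n):
                    M[r][j] -= f * M[c][j]
                rhs[r] -= f * rhs[c]
    return rhs

heights = wls_heights([0.0, 1.0, 2.0], [1.0, 1.0])
```

When the slope observations are consistent with the prior, the adjustment reproduces the prior exactly; with noisy SRTM priors and more trusted PolSAR slopes, the higher slope weight pulls the surface toward the observed gradients.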
118.
Power efficiency is one of the main challenges in large-scale distributed systems such as datacenters, Grids, and Clouds. The scheduling of applications in such systems can be studied by representing each application as a set of precedence-constrained tasks modeled by a Directed Acyclic Graph. In this paper we address the problem of scheduling a set of tasks with precedence constraints on a heterogeneous set of Computing Resources (CRs), with the dual objective of minimizing the overall makespan and reducing the aggregate power consumption of the CRs. Most related work relies on Dynamic Voltage and Frequency Scaling (DVFS) to achieve these objectives, but DVFS requires special hardware support that may not be available on all processors in large-scale distributed systems. In contrast, we propose a novel two-phase solution called PASTA that requires no special hardware support. In its first phase, a novel algorithm selects a subset of the available CRs for running an application, balancing lower overall power consumption against shorter makespan. In its second phase, a low-complexity power-aware algorithm creates a schedule for running the application's tasks on the selected CRs. We show that the overall time complexity of PASTA is $O(p \cdot v^{2})$, where $p$ is the number of CRs and $v$ is the number of tasks. Simulation experiments on real-world task graphs show that the makespans of schedules produced by PASTA are approximately 20% longer than those produced by the well-known HEFT algorithm, while the schedules consume nearly 60% less energy. Empirical experiments on a physical testbed confirm the power efficiency of PASTA compared with HEFT.
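A DAG schedule of the kind described, mapping precedence-constrained tasks onto heterogeneous resources and accounting makespan and energy together, can be sketched with earliest-finish-time list scheduling. This is a generic sketch in the spirit of PASTA's second phase, not the published algorithm; task sizes, speeds, and power ratings are assumptions.

```python
def list_schedule(tasks, deps, speed, power):
    # tasks: name -> work units; deps: name -> set of predecessors;
    # speed/power: resource -> units/sec and watts.
    # Repeatedly schedule every ready task on the resource that
    # finishes it earliest; energy is busy time times power rating.
    remaining = {t: set(pre) for t, pre in deps.items()}
    finish = {}
    free = {r: 0.0 for r in speed}
    energy = {r: 0.0 for r in speed}
    while remaining:
        ready = [t for t, pre in remaining.items()
                 if all(p in finish for p in pre)]
        for t in sorted(ready):
            start_ready = max((finish[p] for p in remaining[t]),
                              default=0.0)
            res = min(speed, key=lambda r: max(free[r], start_ready)
                      + tasks[t] / speed[r])
            dur = tasks[t] / speed[res]
            finish[t] = max(free[res], start_ready) + dur
            free[res] = finish[t]
            energy[res] += dur * power[res]
            del remaining[t]
    return max(finish.values()), sum(energy.values())

tasks = {'a': 4, 'b': 2, 'c': 2, 'd': 4}
deps = {'a': set(), 'b': {'a'}, 'c': {'a'}, 'd': {'b', 'c'}}
speed = {'r1': 2.0, 'r2': 1.0}
power = {'r1': 4.0, 'r2': 1.0}
makespan, total_energy = list_schedule(tasks, deps, speed, power)
```

PASTA's first phase would restrict `speed`/`power` to a chosen subset of CRs before this per-task mapping runs, which is where the energy-versus-makespan trade-off is made.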
119.
In this work, a functional Fe3O4@polydopamine nanocomposite (Fe3O4@PDA) with magnetic response and high specific surface area was successfully assembled by exploiting the strong coordination interactions between these two versatile materials. The morphology and size, crystal structure, saturation magnetization, point of zero charge (pHpzc), chemical structure, and thermal properties were characterized by transmission electron microscopy (TEM), X-ray diffraction (XRD), vibrating sample magnetometry (VSM), Fourier transform infrared spectroscopy (FT-IR), and thermogravimetric analysis (TGA). The self-polymerization of dopamine was completed within 3 days, and the Fe3O4 nanoparticles were embedded in the PDA polymer. TGA results showed that the PDA content of the nanocomposite can reach 51.7 wt% and that the decomposition temperature of PDA drops significantly, from 530 to 270°C, in the presence of the Fe3O4 nanoparticles. From the TGA analysis the coating thickness was estimated to be about 0.86 nm, which agrees well with the values measured from TEM images and XRD analysis. At room temperature, Fe3O4 and Fe3O4@PDA exhibit superparamagnetic behavior with saturation magnetizations of 57.87 and 44.7 emu/g, respectively, as measured by VSM. Furthermore, the pHpzc value fell from 6.7 for Fe3O4 to 3.04 for Fe3O4@PDA. J. VINYL ADDIT. TECHNOL., 25:41–47, 2019. © 2018 Society of Plastics Engineers
120.
The essence of fractal image denoising is to predict the fractal code of a noiseless image from its noisy observation; from the predicted fractal code, one can generate an estimate of the original image. We show how well fractal-wavelet denoising predicts the parent wavelet subtrees of the noiseless image. The performance of various fractal-wavelet denoising schemes (e.g., fixed partitioning, quadtree partitioning) is compared with that of some standard wavelet thresholding methods. We also examine the use of cycle spinning in fractal-based image denoising for the purpose of enhancing the denoised estimates. Our experimental results show that these fractal-based image denoising methods are quite competitive with standard wavelet thresholding methods. Finally, we compare the performance of pixel- and wavelet-based fractal denoising schemes.
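The wavelet-thresholding baseline the fractal schemes are compared against works by transforming the signal, shrinking small (noise-dominated) detail coefficients toward zero, and inverting. A minimal one-level Haar sketch on a 1-D signal (the paper's experiments are 2-D; the signal and threshold below are assumptions):

```python
def haar_step(x):
    # One level of the Haar transform: pairwise averages and details.
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def soft(v, t):
    # Soft thresholding: shrink |v| by t, zeroing anything below t.
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def haar_denoise(x, t):
    # Transform, soft-threshold the details, invert.
    avg, det = haar_step(x)
    det = [soft(d, t) for d in det]
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

# Small pairwise jitter is treated as noise and removed; the large
# step between 1 and 5 survives because its detail exceeds the threshold.
denoised = haar_denoise([1.5, 0.5, 5.0, 5.0], t=0.6)
```

A fractal-wavelet scheme would instead replace noisy detail subtrees with scaled copies of their (better-estimated) parent subtrees, which is the prediction step the abstract evaluates.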