91.
Cloud computing is an emerging technology in which information technology resources are virtualized and offered to users as a set of computing resources on a pay-per-use basis. It is seen as an effective infrastructure for high-performance applications. Divisible load applications occur in many scientific and engineering domains. However, dividing an application and deploying it in a cloud computing environment makes it challenging to obtain optimal performance, owing to the overheads introduced by cloud virtualization and the supporting cloud middleware. We therefore present the results of a series of extensive experiments in scheduling divisible load applications in a cloud environment so as to decrease the overall application execution time, taking into account the networking and computing capacities presented to the application's user. We experiment with real applications within the Amazon cloud computing environment. Our experiments analyze the reasons for the discrepancies between a theoretical model and reality, and propose adequate solutions. These discrepancies stem from three factors: the network behavior, the application behavior, and the cloud computing virtualization. Our results show that applying the algorithm yields a maximum ratio of 1.41 between the measured normalized makespan and the ideal makespan for applications in which the communication-to-computation ratio is high. They also show that the algorithm is effective for such applications in a heterogeneous setting, reaching a ratio of 1.28 for large data sets. For applications following the ensemble clustering model, in which the computation-to-communication ratio is high and variable, we obtained a maximum ratio of 4.7 for large data sets and 2.11 for small data sets. Applying the algorithm also results in a significant speedup. These results are revealing for the types of applications we consider in our experiments. The experiments also reveal the impact of the choice of platforms provided by Amazon on the performance of the applications under study. Given the emergence of cloud computing for high-performance applications, the results in this paper can be widely adopted by cloud computing developers. Copyright © 2014 John Wiley & Sons, Ltd.
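To make the scheduling idea concrete, here is a minimal sketch of classical divisible-load partitioning under a simple linear cost model; the cost model, rates, and overheads below are assumptions of this sketch, not the model used in the paper. Each worker i computes at rate s_i and pays a fixed communication overhead c_i, and the load is split so that all workers finish simultaneously.

```python
# Minimal divisible-load partitioning sketch (illustrative, not the paper's
# algorithm). Linear cost model: worker i finishes at t = c_i + alpha_i / s_i.
# Equal finish times plus sum(alpha_i) = W give a closed-form makespan:
#   t = (W + sum(s_i * c_i)) / sum(s_i)
# Assumes t > c_i for every worker, so all fractions come out positive.

def partition_load(total_load, compute_rates, comm_overheads):
    s_total = sum(compute_rates)
    t = (total_load
         + sum(s * c for s, c in zip(compute_rates, comm_overheads))) / s_total
    fractions = [s * (t - c) for s, c in zip(compute_rates, comm_overheads)]
    return fractions, t

# Three heterogeneous cloud instances: faster machines get larger fractions.
fractions, makespan = partition_load(1000.0, [4.0, 2.0, 1.0], [1.0, 2.0, 4.0])
print(fractions, makespan)   # fractions sum to 1000.0; makespan ~ 144.57
```

The measured-to-ideal makespan ratios reported above quantify how far cloud overheads push real executions away from such an idealized schedule.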
92.
Use case modeling is a popular technique for documenting the functional requirements of software systems. Refactoring is the process of enhancing the structure of a software artifact without changing its intended behavior. Refactoring, first introduced for source code, has been extended to use case models. Antipatterns are low-quality solutions to commonly occurring design problems. The presence of antipatterns in a use case model is likely to propagate defects to other software artifacts. Therefore, detecting and refactoring antipatterns in use case models is crucial for ensuring the overall quality of a software system. Model transformation can greatly ease several software development activities, including model refactoring. In this paper, a model transformation approach is proposed for improving the quality of use case models. Model transformations that detect antipattern instances in a given use case model and refactor them appropriately are defined and implemented. The practicability of the approach is demonstrated by applying it to a case study of a biodiversity database system. The results show that model transformations can efficiently improve the quality of use case models while saving time and effort.
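As an illustration of the detect-and-refactor idea (the concrete antipatterns and transformation rules are the paper's; the one below, an unreachable "orphan" use case, is a simple stand-in), a transformation can be phrased directly over a toy model representation:

```python
# Illustrative sketch, not the paper's transformations: detect a simple
# antipattern -- a use case reachable from no actor and no <<include>>
# relationship -- and refactor the model by deleting it.
from dataclasses import dataclass, field

@dataclass
class UseCaseModel:
    use_cases: set
    actor_links: set = field(default_factory=set)   # (actor, use_case) pairs
    includes: set = field(default_factory=set)      # (base, included) pairs

def detect_orphans(model):
    reachable = ({uc for _, uc in model.actor_links}
                 | {uc for _, uc in model.includes})
    return model.use_cases - reachable

def refactor(model):
    for orphan in detect_orphans(model):
        model.use_cases.discard(orphan)
        model.includes = {(b, i) for b, i in model.includes if b != orphan}
    return model

m = UseCaseModel({"Login", "Backup", "Unused"},
                 {("User", "Login")}, {("Login", "Backup")})
print(detect_orphans(m))   # {'Unused'} -- flagged, then removed by refactor(m)
```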
93.
Artificial neural network modeling has recently acquired enormous importance in the microwave community, especially for analyzing and synthesizing microstrip antennas (MSAs), owing to its generalization and adaptability. A trained neural model estimates a response very quickly, and the estimate is nearly equal to its measured and/or simulated counterpart. It thus completely bypasses the repetitive use of conventional models, which need rediscretization for every minor change in the geometry, itself a time-consuming exercise. The purpose of this article is to review this emerging area comprehensively for both the analysis and the synthesis of MSAs. During the review, some untouched cases were also observed that need to be resolved for antenna designers. Unique and efficient neural-network-based solutions are suggested for these cases. The proposed neural approaches are validated by fabricating and characterizing prototypes. © 2015 Wiley Periodicals, Inc. Int J RF and Microwave CAE 25:747–757, 2015.
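The sketch below conveys the neural-modeling workflow in miniature. The training data come from the simplified transmission-line relation f ≈ c/(2L√εr), which ignores fringing effects; a real surrogate would be trained on full-wave simulations or measurements, and the network size here is an arbitrary illustrative choice.

```python
# Train a small MLP to map patch length and substrate permittivity to the
# resonant frequency of a microstrip patch. Illustrative only: targets come
# from the simplified formula f ~ c / (2 L sqrt(eps_r)), fringing ignored.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
c = 3e8
L = rng.uniform(10e-3, 40e-3, 500)           # patch length in metres
eps_r = rng.uniform(2.2, 10.2, 500)          # relative permittivity
f_ghz = c / (2 * L * np.sqrt(eps_r)) / 1e9   # resonant frequency in GHz

X = np.column_stack([L * 1e3, eps_r])        # features: length (mm), eps_r
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X, f_ghz)
# 30 mm patch on an FR-4-like substrate; closed-form value is ~2.38 GHz.
print(model.predict([[30.0, 4.4]]))
```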
94.
This paper proposes a new feature extraction technique using wavelet-based sub-band parameters (WBSP) for the classification of unaspirated Hindi stop consonants. The extracted acoustic parameters show marked deviation from the values reported for English and other languages, Hindi having distinguishing manner-based features. Because acoustic parameters are difficult to extract automatically for speech recognition, Mel Frequency Cepstral Coefficient (MFCC) based features are usually used. MFCCs are based on the short-time Fourier transform (STFT), which assumes the speech signal to be stationary over a short period; this assumption is specifically violated in the case of stop consonants. In WBSP, guided by the acoustic study, the features derived from CV syllables are given different weighting factors, with the middle segment receiving the maximum. The wavelet transform is applied to split the signal into 8 sub-bands of different bandwidths, and the variation of energy across the sub-bands is also taken into account. WBSP gives improved classification scores. The number of filters used for feature extraction in WBSP (8) is smaller than the number used for MFCC (24). Its classification performance has been compared with four other techniques using a linear classifier. Furthermore, principal component analysis (PCA) has been applied to reduce dimensionality.
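A compact way to obtain eight sub-bands and their energies is a three-level wavelet packet decomposition (2³ = 8 leaves). The sketch below uses PyWavelets; the 'db4' wavelet and the toy input signal are assumptions of the sketch, not specifics from the paper.

```python
# Sub-band energy sketch via a 3-level wavelet packet transform (8 leaves).
import numpy as np
import pywt

def subband_energies(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")   # 8 leaves, low to high frequency
    return np.array([np.sum(node.data ** 2) for node in nodes])

# Toy stand-in for a CV-syllable segment: a windowed tone burst.
t = np.linspace(0, 0.1, 1600)
sig = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)
e = subband_energies(sig)
print(e / e.sum())   # normalised energy distribution across the 8 sub-bands
```

In the WBSP scheme these per-band energies, computed on weighted syllable segments, form the feature vector fed to the classifier.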
95.
The quality of user-generated content on World Wide Web media is a matter of serious concern for both creators and users. To measure the quality of content, webometric techniques are commonly used. More recently, bibliometric techniques, originally devised for scholarly data, have been applied to good effect to evaluate the quality of user-generated content. However, the application of bibliometric techniques to YouTube content has been limited to the h-index and g-index computed over views only. This paper advocates for and demonstrates the adaptation of existing bibliometric indices, including the h-index, g-index, and M-index, to exploit both views and comments, and proposes three indices, hvc, gvc, and mvc, for YouTube video channel ranking. The empirical results show that the proposed indices, using views along with comments, outperform the existing approaches on a real-world YouTube dataset.
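The bibliometric machinery underneath is easy to state. Below is the standard h-index computed over per-video scores; scoring each video by views plus comments, as an assumed reading of the proposed hvc (the paper fixes the exact combination), only changes the score fed to the same procedure.

```python
# Standard h-index over per-video scores (the hvc/gvc/mvc variants differ in
# how the per-video score is formed; views+comments below is an assumption).

def h_index(scores):
    """Largest h such that at least h videos have a score >= h."""
    h = 0
    for rank, s in enumerate(sorted(scores, reverse=True), start=1):
        if s >= rank:
            h = rank
        else:
            break
    return h

views = [120, 90, 41, 6, 5, 3]
comments = [30, 12, 9, 4, 2, 3]
print(h_index(views))                                     # 5, views alone
print(h_index([v + c for v, c in zip(views, comments)]))  # 6, comments lift it
```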
96.
The purpose of this study was to compare the wear of zirconia-toughened alumina (ZTA) and alumina femoral heads tested against as-cast CoCrMo alloy acetabular cups under both standard and severe wear conditions. A new severe test, which included medio-lateral displacement of the head and rim impact upon relocation, was developed. This resulted in an area of metal transfer and an area of increased wear on the superior-anterior segment of the head, thought to be due to dislocation and rim impact respectively. While the wear of all ceramic heads was immeasurable using the gravimetric method, the wear rates for the metallic cups from each test were readily calculated. An average steady-state wear rate of 0.023 ± 0.005 mm³/10⁶ cycles was found for the cups articulating against ZTA under standard wear conditions. A similar result had previously been obtained for the wear of cups articulated against alumina heads of the same size (within the same laboratory). Under severe wear conditions, an increase in the metallic cup steady-state wear rate was found, with the ZTA and alumina tests giving 0.623 ± 0.252 and 1.35 ± 0.154 mm³/10⁶ cycles respectively. Wear of the ceramic heads was detected using atomic force microscopy, which showed, under severe wear conditions, a decrease in polishing marks and occasional grain removal. The surfaces of the ZTA heads tested under standard conditions were virtually unchanged from the unworn samples. Friction tests showed low friction factors for all components, both before and after wear.
97.
98.
During the late 1990s and early 2000s, the profile of global manufacturing experienced many changes. There is anecdotal evidence that many Western manufacturing companies have chosen to expand their manufacturing base across geographical boundaries. The common reasons cited for these ventures are to exploit less expensive labour markets, to establish a presence in expanding markets, and to respond to the threat of new competition. Whilst a global manufacturing base can have many cost and sales benefits, there are also many disadvantages. Logistics operations can often increase in complexity, leading to a higher reliance on planning and on effective interpretation of demand data. In response, systems modelling has re-emerged as a fertile research area after many years. Many modelling and simulation techniques have been developed, but these have had very limited practical success. The authors have identified that the majority of these simulation techniques rely upon a detailed market structure being known, which is rarely the case. This paper describes the outcome of a research project to develop a pragmatic set of tools to gather, assess and verify supply chain structure data. A hybrid collection of technologies is utilised to assist these operations and to build a dynamic supply network model.
99.
The filling of cavities of finite geometry can be described using an analytical model of the hot-embossing process for viscoelastic polymers. This model is based on volume conservation during the forming process, which allows prediction both of the geometrical evolution of the material and of the time needed to fill the cavities in the mould. Particular attention was paid to the time necessary to fill the cavities depending on their shape, or on a scale factor for a given cavity shape.
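As a toy illustration of the volume-conservation principle only (the paper's viscoelastic model is more elaborate): if a stamp of contact area A advances at velocity v, the displaced polymer volume per unit time is Q = A·v, so a cavity of volume V fills in roughly t = V/(A·v). The dimensions below are invented for the example.

```python
# Toy volume-conservation estimate (illustrative, not the paper's model).
# Incompressibility: polymer displaced by the stamp equals the volume
# arriving in the cavities, so fill time = cavity volume / flow rate.

def fill_time(cavity_volume_m3, stamp_area_m2, press_velocity_m_s):
    flow_rate = stamp_area_m2 * press_velocity_m_s   # displaced volume per second
    return cavity_volume_m3 / flow_rate

# 100 um x 100 um x 50 um cavity, 1 mm^2 stamp contact, 10 nm/s penetration.
print(fill_time(100e-6 * 100e-6 * 50e-6, 1e-6, 10e-9))   # -> 50.0 seconds
```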
100.
With the explosive growth in computing and the growing scarcity of electric supply, reducing energy consumption in large-scale computing systems has become a research issue of paramount importance. In this paper, we study the problem of allocating tasks onto a computational grid with the aim of simultaneously minimizing energy consumption and makespan, subject to deadline constraints and the tasks' architectural requirements. We propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. In this cooperative game, machines collectively arrive at a decision that describes the task allocation that is collectively best for the system, ensuring that the allocations are optimized for both energy and makespan. Through rigorous mathematical proofs we show that the proposed cooperative game produces, in only O(nm log m) time (where n is the number of tasks and m is the number of machines in the system), a Nash Bargaining Solution that guarantees Pareto-optimality. Simulation results show that the proposed technique achieves superior performance compared with the Greedy and Linear Relaxation (LR) heuristics, and competitive performance relative to the optimal solution implemented in LINDO for small-scale problems.
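For contrast with the game-theoretic solution, here is a minimal sketch of a greedy baseline in the spirit of the Greedy heuristic used for comparison. Its exact rule is not given in the abstract, so the weighted time-energy cost, the machine model, and all parameters below are assumptions of the sketch.

```python
# Illustrative greedy baseline (not the paper's Nash Bargaining Solution):
# each task goes to the machine minimising a weighted sum of its completion
# time and energy cost there, subject to a common deadline.

def greedy_allocate(tasks, machines, deadline, w_time=0.5, w_energy=0.5):
    # tasks: workloads (cycles); machines: (speed, power) pairs
    finish = [0.0] * len(machines)
    energy = 0.0
    plan = []                                      # follows sorted task order
    for load in sorted(tasks, reverse=True):       # longest task first
        best, best_cost = None, float("inf")
        for i, (speed, power) in enumerate(machines):
            t_done = finish[i] + load / speed
            if t_done > deadline:
                continue                           # violates the deadline
            cost = w_time * t_done + w_energy * power * load / speed
            if cost < best_cost:
                best, best_cost = i, cost
        if best is None:
            raise ValueError("no feasible machine for task under deadline")
        plan.append(best)
        finish[best] += load / machines[best][0]
        energy += machines[best][1] * load / machines[best][0]
    return plan, max(finish), energy

# Fast/power-hungry machine vs slow/frugal one.
plan, makespan, energy = greedy_allocate([8, 5, 4, 2],
                                         [(2.0, 30.0), (1.0, 10.0)],
                                         deadline=10.0)
print(plan, makespan, energy)   # [1, 0, 0, 1] 10.0 235.0
```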