61.
Load balancing is an important stage of any system that uses parallel computing: the aim is to balance the workload among all processors of the system. In this paper, we introduce a new load balancing algorithm for parallel systems with new capabilities, among them independence from a separate route-finding algorithm between the load-receiving and load-sending nodes. In addition to simulating the new algorithm, the central algorithm, which behaves similarly, is simulated for comparison. Simulation results show that system performance increases as the degree of neighborhood between the processors increases. The results also indicate the algorithm's high adaptability to changes in its environment.
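To make the neighborhood idea concrete, here is a minimal sketch of classical diffusion load balancing, in which each node exchanges load only with its direct neighbors. The ring topology, step size `alpha`, and initial loads are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Sketch of neighbor-based (diffusion) load balancing.
# Assumption: each node knows only its neighbors' current loads.

def diffusion_step(loads, neighbors, alpha=0.25):
    """One synchronous round: each node shifts a fraction `alpha`
    of the load difference toward every neighbor."""
    new = list(loads)
    for i, load in enumerate(loads):
        for j in neighbors[i]:
            new[i] += alpha * (loads[j] - load)
    return new

# Ring of 4 processors with an uneven initial workload.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
loads = [100.0, 0.0, 0.0, 0.0]
for _ in range(50):
    loads = diffusion_step(loads, neighbors, alpha=0.25)
```

Each pairwise exchange is symmetric, so total load is conserved while the spread between the most and least loaded nodes shrinks every round; a denser neighborhood makes this convergence faster, matching the simulated trend above.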
62.
In this paper, three issues related to three-dimensional multilink rigid body systems are considered: dynamics, actuation, and inversion. Based on the Newton-Euler equations, a state space formulation of the dynamics is discussed that lends itself to the inclusion of actuators and allows systematic stabilization and construction of inverse systems. The development here is relevant to robotic systems, biological modeling, humanoid studies, and collaborating man-machine systems. The recursive dynamic formulation involves a method for sequential measurement and estimation of joint forces and couples for an open chain system. The sequence can start from the top downwards or from the ground upwards. Three-dimensional actuators that produce couples at the joints are included in the dynamics. Inverse methods that allow estimation of these couples from the kinematic trajectories and physical parameters of the system are developed. The formulation and derivations are carried out for a two-link system. Digital computer simulations of a two-rigid-body system are presented to demonstrate the feasibility and effectiveness of the methods. © 2005 Wiley Periodicals, Inc.
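As a point of reference for the inversion step, the following sketch computes joint torques from a desired trajectory point for a planar two-link arm using the standard textbook closed-form dynamics (inertia, Coriolis/centrifugal, and gravity terms). This is the crisp planar special case, not the paper's three-dimensional recursive formulation, and all link parameters are made-up defaults.

```python
import math

def two_link_torques(q, qd, qdd, m=(1.0, 1.0), l=(1.0, 1.0),
                     lc=(0.5, 0.5), I=(0.1, 0.1), g=9.81):
    """Inverse dynamics tau = M(q)*qdd + C(q, qd)*qd + G(q) for a
    planar two-link arm (standard textbook model)."""
    q1, q2 = q
    c2 = math.cos(q2)
    # Inertia matrix M(q) (symmetric, so M21 = M12)
    M11 = m[0]*lc[0]**2 + m[1]*(l[0]**2 + lc[1]**2 + 2*l[0]*lc[1]*c2) + I[0] + I[1]
    M12 = m[1]*(lc[1]**2 + l[0]*lc[1]*c2) + I[1]
    M22 = m[1]*lc[1]**2 + I[1]
    # Coriolis / centrifugal terms
    h = -m[1]*l[0]*lc[1]*math.sin(q2)
    C1 = 2*h*qd[0]*qd[1] + h*qd[1]**2
    C2 = -h*qd[0]**2
    # Gravity terms
    G1 = (m[0]*lc[0] + m[1]*l[0])*g*math.cos(q1) + m[1]*lc[1]*g*math.cos(q1 + q2)
    G2 = m[1]*lc[1]*g*math.cos(q1 + q2)
    tau1 = M11*qdd[0] + M12*qdd[1] + C1 + G1
    tau2 = M12*qdd[0] + M22*qdd[1] + C2 + G2
    return tau1, tau2
```

Given a kinematic trajectory (q, qd, qdd at each time step), evaluating this map yields the joint couples, which is the essence of the inversion the abstract describes, here in two dimensions only.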
64.
Azolylalkylquinolines (AAQs) are a family of quinolines with varying degrees of cytotoxic activity (comparable or, in some cases, moderately superior to adriamycin) developed in our group over the past decade; their exact mode of action is still unclear. In this study, the most probable DNA binding mode of AAQs was investigated using a flexible ligand docking approach with AutoDock 3.0. Forty-nine AAQs with known experimental inhibitory activity were docked onto the d(CGCAAATTTGCG)2, d(CGATCG)2 and d(CGCG)2 oligonucleotides retrieved from the Protein Data Bank (PDB IDs: 102D, 1D12 and 1D32, respectively) as representatives of the three plausible models of interaction between chemotherapeutic agents and DNA (groove binding, groove binding plus intercalation, and bisintercalation, respectively). A good correlation (r^2 = 0.64) between calculated binding energies and experimental inhibitory activities was obtained using the groove binding plus intercalation model for the phenyl-azolylalkylquinoline (PAAQ) series. Our findings show that the most probable mode of action of PAAQs as DNA binding agents is intercalation of the quinolinic moiety between CG base pairs, with the linker chain and azole moiety binding to the minor groove.
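For readers unfamiliar with how a figure like r^2 = 0.64 is obtained, the sketch below computes the squared Pearson correlation between two series, such as calculated binding energies and experimental activities. The data in the test are made up for illustration and are not the paper's values.

```python
# Squared Pearson correlation between two equal-length sequences.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```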
65.
The quantity of information on the web is greater than ever and increasing rapidly. Searching through this huge amount of data and finding the most relevant and useful result set involves searching, ranking, and presenting the results. Most users examine only the top few results and neglect the rest. To increase user satisfaction, the presented result set should not only be relevant to the search topic, but should also present a variety of perspectives; that is, the results should differ from one another. The effectiveness of web search and the satisfaction of users can be enhanced by providing varied results for a search query in a certain order of relevance and concern. The technique used to avoid presenting similar, though relevant, results to the user is known as diversification of search results. This article presents a survey of the approaches used for search result diversification. To this end, it not only provides a technical survey of existing diversification techniques, but also presents a taxonomy of diversification algorithms with respect to the types of search queries.
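One classical diversification strategy the surveyed literature builds on is Maximal Marginal Relevance (MMR), sketched below: greedily pick results that are relevant to the query but dissimilar from results already chosen. This is a generic illustration, not any specific algorithm from the article; the relevance scores and similarity matrix are assumed inputs.

```python
# Greedy MMR re-ranking: trade off query relevance against
# redundancy with already-selected results.

def mmr(relevance, pairwise_sim, k, lam=0.7):
    """Return indices of k results; lam=1 is pure relevance,
    lam=0 is pure diversity."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(d):
            redundancy = max((pairwise_sim[d][s] for s in selected),
                             default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate top results, MMR keeps one of them and promotes a dissimilar third result instead, which is exactly the behavior the abstract motivates.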
66.
In the classical Bayesian approach, estimation from lifetime data usually deals with precise information. In the real world, however, some information about an underlying system may be imprecise and represented in the form of vague quantities. In these situations, classical methods need to be generalized to a vague environment for studying and analyzing the systems of interest. In this paper, we propose Bayesian estimation of the failure rate and mean time to failure based on vague set theory for both complete and censored data sets. To employ the Bayesian approach, model parameters are assumed to be vague random variables with vague prior distributions. This approach is used to induce the vague Bayes estimates of the failure rate and mean time to failure by introducing and applying a theorem called the "Resolution Identity" for vague sets. To evaluate the membership degrees of the vague Bayesian estimates of these quantities, a computational procedure is investigated. In the proposed method, the original problem is transformed into a nonlinear programming problem, which is then divided into eight subproblems to simplify the computations.
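As background, the crisp (non-vague) special case is sketched below: exponential lifetimes with a conjugate Gamma(a, b) prior on the failure rate. The paper's contribution is extending such estimates to vague priors via the Resolution Identity, which is not reproduced here; the prior hyperparameters are illustrative assumptions.

```python
# Crisp Bayes estimates for an exponential lifetime model.
# Prior: lambda ~ Gamma(a, b); posterior: Gamma(a + n, b + sum(t)).

def bayes_failure_rate(lifetimes, a=1.0, b=1.0):
    """Posterior-mean estimates of the failure rate and MTTF."""
    n, total = len(lifetimes), sum(lifetimes)
    rate = (a + n) / (b + total)        # E[lambda | data]
    mttf = (b + total) / (a + n - 1)    # E[1/lambda | data], needs a + n > 1
    return rate, mttf
```

In the vague setting, each hyperparameter becomes a vague quantity, and the scalar estimates above are replaced by membership functions evaluated cut by cut, which is what drives the nonlinear programming formulation mentioned in the abstract.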
67.
Coronavirus disease (COVID-19) is a pandemic that has caused thousands of casualties and impacts all over the world. Most countries face a shortage of COVID-19 test kits in hospitals due to the daily increase in the number of cases. Early detection of COVID-19 can protect people from severe infection. Unfortunately, COVID-19 can be misdiagnosed as pneumonia or another illness and can lead to patient death. Therefore, to avoid the spread of COVID-19 among the population, it is necessary to implement an automated early diagnostic system as a rapid alternative. Several researchers have done well in detecting COVID-19; however, many approaches suffer from lower accuracy and overfitting, which make early screening difficult. Transfer learning is the most successful technique to address this problem with higher accuracy. In this paper, we studied the feasibility of applying transfer learning, adding our own classifier to automatically classify COVID-19, because transfer learning is well suited to medical imaging given the limited availability of data. We propose a CNN model based on deep transfer learning using six different pre-trained architectures: VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0. A total of 3886 chest X-rays (1200 cases of COVID-19, 1341 healthy, and 1345 cases of viral pneumonia) were used to study the effectiveness of the proposed model. A comparative analysis of the proposed CNN models on the three-class chest X-ray dataset was carried out to find the most suitable model. Experimental results show that the model based on VGG16 accurately diagnosed COVID-19 patients with 97.84% accuracy, 97.90% precision, 97.89% sensitivity, and an F1-score of 97.89%. Evaluation on the test data shows that this model produces the highest accuracy among the CNNs and appears to be the most suitable choice for COVID-19 classification. We believe that in this pandemic situation, this model will support healthcare professionals in improving patient screening.
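To clarify how figures such as 97.84% accuracy, 97.90% precision, 97.89% sensitivity, and 97.89% F1 are derived, the sketch below computes macro-averaged metrics from a multi-class confusion matrix. The 3x3 matrix in the test is a toy example, not the paper's results.

```python
# Macro-averaged accuracy / precision / recall / F1 from a
# confusion matrix cm, where cm[true][pred] is a count.

def macro_metrics(cm):
    n_classes = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n_classes))
    precisions, recalls = [], []
    for i in range(n_classes):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n_classes)) - tp   # column minus tp
        fn = sum(cm[i]) - tp                                # row minus tp
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precisions) / n_classes
    recall = sum(recalls) / n_classes
    f1 = 2 * precision * recall / (precision + recall)
    return correct / total, precision, recall, f1
```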
69.
Clustering data streams has drawn a lot of attention in the last few years due to their ever-growing presence. Data streams put additional challenges on clustering, such as limited time and memory and the requirement of one-pass processing. Furthermore, discovering clusters with arbitrary shapes is very important in data stream applications. Data streams are infinite and evolve over time, and we have no prior knowledge about the number of clusters. In a data stream environment, due to various factors, some noise appears occasionally. Density-based methods are a remarkable class of clustering algorithms for data streams: they can discover arbitrarily shaped clusters and detect noise, and they do not need the number of clusters in advance. Due to data stream characteristics, however, traditional density-based clustering is not directly applicable. Recently, many density-based clustering algorithms have been extended to data streams. The main idea in these algorithms is to use density-based methods in the clustering process while overcoming the constraints imposed by the data stream's nature. The purpose of this paper is to shed light on algorithms in the literature on density-based clustering over data streams. We not only summarize the main density-based clustering algorithms on data streams and discuss their uniqueness and limitations, but also explain how they address the challenges of clustering data streams. Moreover, we investigate the evaluation metrics used in validating cluster quality and measuring algorithms' performance. It is hoped that this survey will serve as a steppingstone for researchers studying data stream clustering, particularly density-based algorithms.
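A common building block in the surveyed family of algorithms is the decaying micro-cluster: a compact summary updated in one pass, with older data fading over time. The sketch below is a minimal illustration of that idea; the radius threshold and decay factor are arbitrary assumptions, and no specific published algorithm is being reproduced.

```python
import math

# One-pass micro-clustering with exponential decay, in the spirit
# of density-based stream clustering summaries.

class MicroCluster:
    def __init__(self, point):
        self.weight, self.center = 1.0, list(point)

    def absorb(self, point):
        self.weight += 1.0
        # Incremental weighted-mean update of the centre.
        self.center = [c + (p - c) / self.weight
                       for c, p in zip(self.center, point)]

def cluster_stream(points, radius=1.0, decay=0.99):
    clusters = []
    for p in points:
        for mc in clusters:            # fade every summary per arrival
            mc.weight *= decay
        near = min(clusters, default=None,
                   key=lambda mc: math.dist(mc.center, p))
        if near and math.dist(near.center, p) <= radius:
            near.absorb(p)
        else:
            clusters.append(MicroCluster(p))
    return clusters
```

Because each point is touched once and only compact summaries are stored, this respects the limited time and memory constraints above; a full algorithm would additionally prune low-weight summaries as noise and derive final arbitrary-shape clusters from the surviving micro-clusters.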
70.
Covering problems are fundamental classical problems in optimization, computer science, and complexity theory. Typically, an input to these problems is a family of sets over a finite universe, and the goal is to cover the elements of the universe with as few sets of the family as possible. Variations of covering problems include well-known problems such as Set Cover, Vertex Cover, Dominating Set, and Facility Location, to name a few. Recently there has been a lot of study of partial covering problems, a natural generalization of covering problems. Here, the goal is not to cover all the elements but to cover a specified number of elements with the minimum number of sets. In this paper we study partial covering problems in graphs in the realm of parameterized complexity. The classical (non-partial) versions of all these problems have been intensively studied in planar graphs and in graphs excluding a fixed graph H as a minor. However, the techniques developed for the parameterized versions of non-partial covering problems cannot be applied directly to their partial counterparts. The approach we use to show that various partial covering problems are fixed-parameter tractable on planar graphs, graphs of bounded local treewidth, and graphs excluding some graph as a minor is quite different from previously known techniques. The main idea behind our approach is the concept of implicit branching. We find the implicit branching technique interesting in its own right and believe that it can be used for other problems.
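To fix intuitions about the problem definition, here is a sketch of the standard greedy heuristic for Partial Set Cover: cover at least k elements using few sets by repeatedly taking the set with the largest uncovered gain. This is the textbook heuristic, not the implicit-branching fixed-parameter technique the paper develops.

```python
# Greedy heuristic for Partial Set Cover: cover >= k universe
# elements with as few sets as the greedy rule manages.

def greedy_partial_cover(sets, k):
    covered, chosen = set(), []
    while len(covered) < k:
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        gain = sets[best] - covered
        if not gain:
            return None            # fewer than k elements are coverable
        covered |= gain
        chosen.append(best)
    return chosen
```

Greedy gives only an approximation here; the paper's point is that exact parameterized algorithms for such partial problems on planar and minor-free graphs need genuinely different machinery.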