1,088 query results found (search time: 78 ms)
991.
Solving Mixed and Conditional Constraint Satisfaction Problems   (Total citations: 3; self-citations: 0; citations by others: 3)
Constraints are a powerful general paradigm for representing knowledge in intelligent systems. The standard constraint satisfaction paradigm involves variables over a discrete value domain and constraints which restrict the solutions to allowed value combinations. This standard paradigm is inapplicable to problems which are either (a) mixed, involving both numeric and discrete variables; (b) conditional, containing variables whose existence depends on the values chosen for other variables; or (c) both conditional and mixed. We present a general formalism which handles both extensions in an integrated search framework. We solve conditional problems by analyzing dependencies between constraints, which enables us to compute all possible configurations of the CSP directly rather than discovering them during search. For mixed problems, we present an enumeration scheme that integrates numeric variables with discrete ones in a single search process. Both techniques take advantage of enhanced propagation rules for numeric variables that produce tighter labelings than the commonly used algorithms. From real-world examples in configuration and design, we identify several types of mixed constraints, i.e. constraints defined over both numeric and discrete variables, and propose new propagation rules in order to take advantage of these constraints during problem solving.
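The mixed-search idea can be illustrated with a toy sketch: discrete variables are enumerated by backtracking, while each discrete choice narrows a numeric variable's interval instead of enumerating its values. The problem, names, and narrowing rules below are illustrative assumptions, not the formalism from the paper:

```python
# Toy mixed CSP: discrete variables are enumerated by backtracking,
# while a numeric variable is kept as an interval that each discrete
# choice narrows. Illustrative only; not the paper's formalism.

def narrow(interval, bound):
    """Intersect two intervals; None signals an empty (inconsistent) domain."""
    lo, hi = max(interval[0], bound[0]), min(interval[1], bound[1])
    return (lo, hi) if lo <= hi else None

def solve(discrete_domains, numeric_interval, mixed_rules):
    """Return all (discrete assignment, numeric interval) solutions."""
    solutions = []
    variables = list(discrete_domains)

    def backtrack(assignment, interval, remaining):
        if interval is None:          # numeric domain emptied: prune branch
            return
        if not remaining:
            solutions.append((dict(assignment), interval))
            return
        var = remaining[0]
        for value in discrete_domains[var]:
            assignment[var] = value
            rule = mixed_rules.get((var, value))
            backtrack(assignment,
                      narrow(interval, rule) if rule else interval,
                      remaining[1:])
            del assignment[var]

    backtrack({}, numeric_interval, variables)
    return solutions

# Toy problem: the chosen motor type constrains the admissible power x.
domains = {"motor": ["small", "large"]}
rules = {("motor", "small"): (0.0, 5.0), ("motor", "large"): (10.0, 20.0)}
result = solve(domains, (3.0, 12.0), rules)
print(result)
```

Each branch of the discrete search carries a narrowed interval for the numeric variable, so numeric values never need to be enumerated.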
992.
Magnetic resonance (MR) tomographic images are routinely used in the diagnosis of liver pathologies, and liver segmentation is needed for these types of images. It is therefore an important prerequisite for later tasks such as comparison among studies of different patients, as well as studies of the same patient (including those taken during the diffusion of a contrast agent, as in perfusion MR imaging). However, automatic segmentation of the liver is a challenging task due to the high variability of liver shapes, similar intensity values, and unclear contours between the liver and surrounding organs, especially in perfusion MR images. To overcome these limitations, this work proposes the use of a probabilistic atlas for liver segmentation in perfusion MR images, combining the information it provides with that of level-based segmentation methods. The process starts with an under-segmented shape that grows slice by slice using morphological techniques (namely, viscous reconstruction), the result of the closest segmented slice, and the probabilistic information provided by the atlas. Experiments with a collection of manually segmented liver images are provided, including numerical evaluation using widely accepted metrics for shape comparison.
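The slice-growing step can be pictured with plain binary geodesic reconstruction: the previous slice's segmentation (the marker) is dilated inside the current slice's admissible region (the mask) without ever leaving it. This pure-Python sketch is a stand-in for the paper's viscous reconstruction, and the arrays are made-up toy data:

```python
# Plain binary geodesic reconstruction in pure Python: grow the marker
# (previous slice's segmentation) by 4-connected dilation, never leaving
# the mask (the current slice's admissible region). A stand-in for the
# paper's viscous reconstruction; the arrays are made-up toy data.

def reconstruct(marker, mask):
    rows, cols = len(mask), len(mask[0])
    out = [[1 if marker[r][c] and mask[r][c] else 0 for c in range(cols)]
           for r in range(rows)]
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if out[r][c] or not mask[r][c]:
                    continue
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= rr < rows and 0 <= cc < cols and out[rr][cc]:
                        out[r][c] = 1
                        changed = True
                        break
    return out

mask = [[0, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1]]
marker = [[0, 1, 0, 0],    # one seed pixel from the previous slice
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
grown = reconstruct(marker, mask)
print(grown)
```

The seed grows until it fills the connected component of the mask that contains it, which is how a segmentation from an adjacent slice can be extended into the current one.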
993.
Many distributed database applications need to replicate data to improve data availability and query response time. The two-phase commit protocol guarantees mutual consistency of replicated data but does not provide good performance. Lazy replication has been used as an alternative in several types of applications, such as on-line financial transactions and telecommunication systems. In this case, mutual consistency is relaxed and the concept of freshness is used to measure the deviation between replica copies. In this paper, we propose two update propagation strategies that improve freshness. Both use immediate propagation: updates to a primary copy are propagated towards a slave node as soon as they are detected at the master node, without waiting for the commitment of the update transaction. Our performance study shows that our strategies can improve data freshness by up to five times compared with the deferred approach. Received April 24, 1998 / Revised June 7, 1999
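The freshness gain of immediate propagation can be sketched with a toy discrete-time model: in the deferred approach all updates ship only after the transaction commits, while immediate propagation ships each update as soon as it is detected. The model below (one update per step, fixed network delay, lag as the freshness measure) is an illustrative assumption, far simpler than the paper's performance study:

```python
# Toy discrete-time model of update propagation freshness. The master
# applies one update per time step; each shipped update arrives
# network_delay steps after it is sent. Freshness is measured as the
# slave's mean update lag. All parameters are illustrative assumptions.

def mean_lag(n_updates, network_delay, immediate):
    if immediate:
        # ship each update as soon as the master detects it
        arrivals = [t + network_delay for t in range(n_updates)]
    else:
        # deferred: everything ships only after the transaction commits
        commit_time = n_updates - 1
        arrivals = [commit_time + network_delay] * n_updates
    horizon = max(arrivals) + 1
    lags = []
    for t in range(horizon):
        applied_on_master = min(n_updates, t + 1)
        applied_on_slave = sum(1 for a in arrivals if a <= t)
        lags.append(applied_on_master - applied_on_slave)
    return sum(lags) / horizon

lag_immediate = mean_lag(10, 2, immediate=True)
lag_deferred = mean_lag(10, 2, immediate=False)
print(lag_immediate, lag_deferred)
```

Even in this crude model the immediate strategy keeps the slave several times fresher than the deferred one, in line with the direction of the paper's result.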
994.
This empirical study investigates the effects of long-term embedded, structured, and supported instruction in secondary education on the development of Information Problem Solving (IPS) skills. Forty secondary students in the 7th and 8th grades (13–15 years old) took part in the 2-year IPS instruction study: twenty received the IPS instruction, and the remaining twenty formed the control group. All students were pre- and post-tested in their regular classrooms, and their IPS process and performance were logged by means of screen-capture software to ensure ecological validity. The IPS constituent skills, the web search sub-skills, and the answers given by each participant were analyzed. The main findings suggest that experimental students showed a more expert pattern than control students for the constituent skill 'defining the problem' and for two web search sub-skills: the search terms typed into a search engine, and the results selected from a search engine results page (SERP). In addition, task performance scores were statistically better for experimental students than for control-group students. The paper contributes to the discussion of how scaffolds can be designed and embedded in instructional programs to guarantee the development and efficiency of students' IPS skills, enabling them to use online information better and participate fully in the global knowledge society.
995.
Minimal siphons in the class of S4PR nets have become a central conceptual and practical tool for studying resource-allocation aspects of discrete event dynamic systems, such as the existence of deadlocks. The availability of efficient algorithms to compute minimal siphons is therefore essential. In this paper we take advantage of the particular properties of siphons in S4PR nets to obtain an efficient algorithm. These properties allow us to express minimal siphons as the union of pruned minimal siphons containing only one resource. The pruning operation is built from a binary pruning relation defined on the set of minimal siphons containing only one resource; this relation is represented by means of a directed graph. The computation of the minimal siphons is based on the maximal strongly connected components of this graph. The algorithm is highly economical in memory in all intermediate steps when compared to classical algorithms.
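The graph step the abstract describes, taking the maximal strongly connected components of the directed pruning-relation graph, can be sketched with a standard SCC routine. Tarjan's algorithm below is a stand-in for whatever routine an implementation would use, and the four "siphon" nodes are toy data, not an S4PR example:

```python
# Iterative Tarjan SCC computation over a directed "pruning relation"
# graph. The graph below is toy data; in the paper's setting the nodes
# would be minimal siphons containing only one resource.

def strongly_connected_components(graph):
    """graph maps node -> list of successors; returns a list of SCCs."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        work = [(v, 0)]
        while work:
            node, i = work.pop()
            if i == 0:                       # first visit of this node
                index[node] = lowlink[node] = counter[0]
                counter[0] += 1
                stack.append(node)
                on_stack.add(node)
            recursed = False
            succs = graph.get(node, [])
            for j in range(i, len(succs)):
                w = succs[j]
                if w not in index:           # descend into unvisited successor
                    work.append((node, j + 1))
                    work.append((w, 0))
                    recursed = True
                    break
                elif w in on_stack:
                    lowlink[node] = min(lowlink[node], index[w])
            if not recursed:
                if lowlink[node] == index[node]:   # node is an SCC root
                    comp = []
                    while True:
                        w = stack.pop()
                        on_stack.discard(w)
                        comp.append(w)
                        if w == node:
                            break
                    sccs.append(frozenset(comp))
                if work:                     # propagate lowlink to parent
                    parent = work[-1][0]
                    lowlink[parent] = min(lowlink[parent], lowlink[node])

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Toy pruning-relation graph over four one-resource "siphons" s1..s4.
g = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s4"], "s4": []}
components = strongly_connected_components(g)
print(components)
```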
996.
We present a new approach for the problem of finding overlapping communities in graphs and social networks. Our approach consists of a novel problem definition and three accompanying algorithms. We are particularly interested in graphs that have labels on their vertices, although our methods are also applicable to graphs with no labels. Our goal is to find k communities so that the total edge density over all k communities is maximized. In the case of labeled graphs, we require that each community is succinctly described by a set of labels. This requirement provides a better understanding for the discovered communities. The proposed problem formulation leads to the discovery of vertex-overlapping and dense communities that cover as many graph edges as possible. We capture these properties with a simple objective function, which we solve by adapting efficient approximation algorithms for the generalized maximum-coverage problem and the densest-subgraph problem. Our proposed algorithm is a generic greedy scheme. We experiment with three variants of the scheme, obtained by varying the greedy step of finding a dense subgraph. We validate our algorithms by comparing with other state-of-the-art community-detection methods on a variety of performance measures. Our experiments confirm that our algorithms achieve results of high quality in terms of the reported measures, and are practical in terms of performance.
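One concrete instance of the dense-subgraph greedy step is Charikar-style peeling, a classic 2-approximation for the densest subgraph: repeatedly delete a minimum-degree vertex and keep the densest vertex set seen along the way. This sketch shows only that inner step on toy data, not the paper's full coverage scheme:

```python
# Charikar-style peeling for the densest subgraph (a classic greedy
# 2-approximation): repeatedly delete a minimum-degree vertex and keep
# the densest vertex set seen. Toy data; the paper's full scheme wraps
# such a step inside a coverage-maximizing loop.

def densest_subgraph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    m = len(edges)
    current = {v: set(nbrs) for v, nbrs in adj.items()}
    best, best_density = set(current), 0.0
    while current:
        density = m / len(current)
        if density >= best_density:   # ">=" prefers later (smaller) ties
            best, best_density = set(current), density
        victim = min(current, key=lambda v: len(current[v]))
        m -= len(current[victim])     # victim's incident edges disappear
        for w in current[victim]:
            current[w].discard(victim)
        del current[victim]
    return best, best_density

# A triangle plus a pendant vertex: the triangle is the densest part.
sub, density = densest_subgraph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(sub, density)
```

Density here is edges divided by vertices; the pendant vertex is peeled first, leaving the triangle as the reported subgraph.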
999.
One of the great challenges the information society faces is dealing with the huge amount of information generated and handled daily on the Internet. Today, progress in Big Data attempts to solve this problem, but information search and retrieval still face limitations due to the large volumes handled, the heterogeneity of the information, and its dispersion among a multitude of sources. In this article, a formal framework is defined to facilitate the design and development of an environmental management information system that works with a large amount of heterogeneous data. This framework can also be applied to other information systems that work with Big Data, because it does not depend on the type of data and can be utilized in other domains. The framework is based on an ontological web-trading model (OntoTrader), which follows model-driven engineering and ontology-driven engineering guidelines to separate the system architecture from its implementation. The proposal is accompanied by a case study, SOLERES-KRS, an environmental knowledge representation system designed and developed using software agents and multi-agent systems. Copyright © 2017 John Wiley & Sons, Ltd.
1000.
Nowadays, more and more computer-based scientific experiments need to handle massive amounts of data, with data processing that consists of multiple computational steps and dependencies among them. A data-intensive scientific workflow is useful for modeling such a process. Since sequential execution of data-intensive scientific workflows can be very time-consuming, Scientific Workflow Management Systems (SWfMSs) should enable their parallel execution and exploit resources distributed across different infrastructures such as grid and cloud. This paper provides a survey of data-intensive scientific workflow management in SWfMSs and their parallelization techniques. Based on a SWfMS functional architecture, we give a comparative analysis of the existing solutions. Finally, we identify research issues for improving the execution of data-intensive scientific workflows in a multisite cloud.
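The core idea behind parallel workflow execution can be sketched in a few lines: repeatedly run, in parallel, every task whose prerequisites have completed. The DAG and task functions below are illustrative assumptions; a real SWfMS adds data transfer, provenance tracking, and multi-site placement on top of this loop:

```python
# Minimal wave-based scheduler for a workflow DAG: every task whose
# prerequisites are done runs in parallel. Task names and functions are
# illustrative; a real SWfMS adds data transfer, provenance tracking,
# and multi-site placement on top of this core loop.
from concurrent.futures import ThreadPoolExecutor

def run_workflow(deps, task_fns):
    """deps maps task -> set of prerequisite tasks; task_fns maps
    task -> fn(results_so_far) -> result."""
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(deps):
            ready = [t for t in deps if t not in done and deps[t] <= done]
            if not ready:
                raise ValueError("cycle in workflow DAG")
            futures = {t: pool.submit(task_fns[t], results) for t in ready}
            for t, f in futures.items():
                results[t] = f.result()
            done.update(ready)
    return results

# Toy map-reduce-shaped workflow: split feeds two parallel maps,
# whose outputs feed a final reduce.
deps = {"split": set(), "map1": {"split"}, "map2": {"split"},
        "reduce": {"map1", "map2"}}
fns = {"split": lambda r: [1, 2, 3, 4],
       "map1": lambda r: sum(r["split"][:2]),
       "map2": lambda r: sum(r["split"][2:]),
       "reduce": lambda r: r["map1"] + r["map2"]}
results = run_workflow(deps, fns)
print(results["reduce"])
```

Here map1 and map2 run in the same wave, which is exactly the independent-task parallelism the survey's SWfMSs exploit at much larger scale.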

Copyright©北京勤云科技发展有限公司  京ICP备09084417号