261 search results found (15 ms)
11.
Derandomized graph products  (total citations: 1; self-citations: 0; citations by others: 1)
Berman and Schnitger gave a randomized reduction from approximating MAX-SNP problems within constant factors arbitrarily close to 1 to approximating clique within a factor of n^ε (for some ε > 0). This reduction was further studied by Blum, who gave it the name randomized graph products. We show that this reduction can be made deterministic (derandomized), using random walks on expander graphs. The main technical contribution of this paper is in proving a lower bound for the probability that all steps of a random walk stay within a specified set of vertices of a graph. (Previous work was mainly concerned with upper bounds for this probability.) This lower bound extends also to the case where different sets of vertices are specified for different time steps of the walk.
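The quantity the paper lower-bounds can be explored empirically. The sketch below is a toy Monte Carlo illustration, not the paper's construction: the small chorded cycle standing in for an expander, the set S, and all names are assumptions. It estimates the probability that every step of a t-step random walk on a d-regular graph stays inside a specified vertex set S.

```python
import random

def stay_probability(adj, S, t, trials=20000, seed=0):
    """Estimate Pr[all t steps of a random walk stay inside S],
    starting uniformly from a vertex of S."""
    rng = random.Random(seed)
    S = set(S)
    start = sorted(S)
    hits = 0
    for _ in range(trials):
        v = rng.choice(start)           # start uniformly inside S
        ok = True
        for _ in range(t):
            v = rng.choice(adj[v])      # one uniform random-walk step
            if v not in S:
                ok = False
                break
        if ok:
            hits += 1
    return hits / trials

# 8-cycle with "diameter" chords: a toy stand-in for a 3-regular expander.
n = 8
adj = {v: [(v - 1) % n, (v + 1) % n, (v + n // 2) % n] for v in range(n)}
p = stay_probability(adj, S={0, 1, 2, 3}, t=4)
```

On a true expander, such probabilities decay geometrically in t at a rate governed by the spectral gap and the density of S, which is the regime the paper's bounds address.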
12.
The control algorithm based on the uncertainty and disturbance estimator (UDE) is a robust control strategy that has received wide attention in recent years. In this paper, the two-degree-of-freedom nature of UDE-based controllers is revealed. The set-point tracking response is determined by the reference model, whereas the disturbance response and robustness are determined by the error feedback gain and the filter introduced to estimate the uncertainty and disturbances. It is also revealed that the error dynamics of the system are determined by two filters, one determined by the error feedback gain and the other by the filter introduced to estimate the uncertainty and disturbances. The design of these two filters is decoupled in the frequency domain. Moreover, after introducing the UDE-based control, the Laplace transform can be applied to some time-varying systems for analysis and design because all the time-varying parts are lumped into a single signal. It is shown that, in addition to its known advantages over time-delay control, UDE-based control also delivers better performance than time-delay control under the same conditions. Design examples and simulation results are given to demonstrate the findings. Copyright © 2010 John Wiley & Sons, Ltd.
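A minimal sketch of the idea, not the paper's formulation: for a first-order plant with a constant disturbance, the lumped uncertainty is recovered through a first-order estimation filter while an independent reference model shapes the set-point response. The plant, gains, and filter time constant below are all assumptions for illustration.

```python
# Toy UDE-style control of dx/dt = a*x + u + d (d unknown to the controller).
dt = 0.001
a, T, k = -1.0, 0.05, 5.0       # plant pole, UDE filter time constant, error gain
x, xm, dhat = 0.0, 0.0, 0.0     # plant state, reference-model state, disturbance estimate
r, d = 1.0, 0.5                 # set-point and (unknown) constant disturbance

for _ in range(int(5 / dt)):
    xm += dt * (-2.0 * (xm - r))                 # reference model: tracking response
    e = x - xm
    u = -a * x + (-2.0) * (xm - r) - k * e - dhat  # UDE-style control law
    xdot = a * x + u + d                         # true plant with disturbance
    dhat += dt * ((xdot - a * x - u) - dhat) / T  # first-order UDE filter
    x += dt * xdot
```

After the transients, `dhat` converges to the true disturbance and `x` tracks the reference model, illustrating the two-degree-of-freedom separation: the reference model fixes tracking, the filter fixes disturbance rejection.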
13.
Often, even if a crowd simulation looks good overall, specific individual behaviors may not seem correct. Spotting such problems manually can become tedious, but ignoring them may harm the simulation's credibility. In this paper we present a data-driven approach for evaluating the behaviors of individuals within a simulated crowd. Based on video footage of a real crowd, a database of behavior examples is generated. Given a simulation of a crowd, an analogous analysis is performed on it, defining a set of queries, which are matched by a similarity function to the database examples. The results offer a possible objective answer to the question of how similar the simulated individual behaviors are to real observed behaviors. Moreover, by changing the video input one can change the context of evaluation. We show several examples of evaluating simulated crowds produced using different techniques and comprising dense crowds, sparse crowds and flocks.
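A nearest-neighbor sketch of this kind of query-to-database matching, assuming hand-made per-individual feature vectors (e.g. speed and curvature); the features, data, and scoring function are illustrative assumptions, not the paper's similarity measure.

```python
import math

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def evaluate(queries, database):
    """Score simulated behaviors by distance to the closest real example;
    a smaller mean distance means a more 'real-looking' simulation."""
    return sum(min(euclid(q, ex) for ex in database) for q in queries) / len(queries)

# Database of real observed behaviors, e.g. (speed, curvature) per person.
real = [(1.2, 0.1), (0.9, 0.3), (1.1, 0.2)]
sim_good = [(1.0, 0.2), (1.15, 0.15)]   # plausible simulated individuals
sim_bad = [(3.0, 2.0), (0.0, 5.0)]      # implausible simulated individuals

good_score = evaluate(sim_good, real)
bad_score = evaluate(sim_bad, real)
```

Swapping in a different `real` database changes the context of evaluation, mirroring the paper's point about changing the video input.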
14.
The Meteor metric for automatic evaluation of machine translation  (total citations: 1; self-citations: 1; citations by others: 0)
The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM's Bleu, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that several metric evaluations have confirmed to be critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing the metric's free parameters to be tuned in search of values that yield optimal correlation with human judgments. Optimal parameters can be tuned separately for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and performance in several independent evaluations in recent years.
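A simplified, exact-match-only sketch of a Meteor-style score. The real metric also matches stems and synonyms and has tunable parameters; the constants below follow the commonly cited original formulation (recall-weighted harmonic mean 10PR/(R+9P) and fragmentation penalty 0.5·(chunks/matches)³), so treat them as an assumption rather than the released tool's defaults.

```python
def meteor_exact(hyp, ref):
    """Toy Meteor-style score using only exact unigram matches."""
    hyp, ref = hyp.split(), ref.split()
    used = [False] * len(ref)
    align = []  # (hyp_idx, ref_idx) pairs, greedy first-available matching
    for i, w in enumerate(hyp):
        for j, r in enumerate(ref):
            if not used[j] and w == r:
                used[j] = True
                align.append((i, j))
                break
    m = len(align)
    if m == 0:
        return 0.0
    p, r = m / len(hyp), m / len(ref)
    fmean = 10 * p * r / (r + 9 * p)        # harmonic mean weighted toward recall
    chunks = 1                              # fewest runs of contiguous matches
    for (i1, j1), (i2, j2) in zip(align, align[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3       # fragmentation penalty
    return fmean * (1 - penalty)
```

A reordered hypothesis keeps perfect precision and recall but fragments into more chunks, so it scores lower than an identical one, which is exactly the word-order sensitivity the penalty is for.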
17.
Luke, Oren, Alon. Journal of Web Semantics, 2004, 2(2): 153-183
This paper investigates how the vision of the Semantic Web can be carried over to the realm of email. We introduce a general notion of semantic email, in which an email message consists of a structured query or update coupled with corresponding explanatory text. Semantic email opens the door to a wide range of automated, email-mediated applications with formally guaranteed properties. In particular, this paper introduces a broad class of semantic email processes. For example, consider the process of sending an email to a program committee, asking who will attend the PC dinner, automatically collecting the responses, and tallying them up. We define both logical and decision-theoretic models where an email process is modeled as a set of updates to a data set on which we specify goals via certain constraints or utilities. We then describe a set of inference problems that arise while trying to satisfy these goals and analyze their computational tractability. In particular, we show that for the logical model it is possible to automatically infer which email responses are acceptable with respect to a set of constraints in polynomial time, and for the decision-theoretic model it is possible to compute the optimal message-handling policy in polynomial time. In addition, we show how to automatically generate explanations for a process's actions, and identify cases where such explanations can be generated in polynomial time. Finally, we discuss our publicly available implementation of semantic email and outline research challenges in this realm.
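As a toy version of the logical model's acceptability question, consider a PC-dinner RSVP process whose constraint is that the final number of "yes" replies must land in an interval [lo, hi]. Deciding whether one more response is acceptable then reduces to a constant-time interval check. This framing and all names are illustrative assumptions, not the paper's formalism.

```python
def acceptable(response, yes_so_far, remaining, lo, hi):
    """Can the pending responders still bring the final 'yes' tally
    into [lo, hi] after this response is counted?

    `remaining` counts responders who have not yet answered,
    including the one answering now."""
    y = yes_so_far + (response == "yes")
    rem = remaining - 1
    return y <= hi and y + rem >= lo

# 7 yeses so far, 3 people still pending, dinner needs 5-12 attendees:
ok = acceptable("yes", yes_so_far=7, remaining=3, lo=5, hi=12)
```

Rejecting an unacceptable response (or asking the sender to reconsider) is the kind of automated, formally justified action the paper's processes take.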
18.
The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data model. To flourish, the Semantic Web needs to provide interoperability, both between sites with different terminologies and with existing data and the applications operating on them. To achieve this, we are faced with two problems. First, most of the world's data is available not in RDF but in XML; XML and the applications consuming it rely not only on the domain structure of the data, but also on its document structure. Hence, to provide interoperability between such sources, we must map between both their domain structures and their document structures. Second, data management practitioners often prefer to exchange data through local point-to-point data translations, rather than mapping to common mediated schemas or ontologies. This paper describes the Piazza system, which addresses these challenges. Piazza offers a language for mediating between data sources on the Semantic Web, and it maps both the domain structure and document structure. Piazza also enables interoperation of XML data with RDF data that is accompanied by rich OWL ontologies. Mappings in Piazza are provided at a local scale between small sets of nodes, and our query answering algorithm is able to chain sets of mappings together to obtain relevant data from across the Piazza network. We also describe an implemented scenario in Piazza and the lessons we learned from it.
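The chaining of local mappings can be sketched as graph search: treat each point-to-point mapping as an edge and compose mappings along a path from source to destination. This is a toy sketch; the schemas, mapping functions, and breadth-first strategy are assumptions for illustration, not Piazza's query answering algorithm.

```python
from collections import deque

# Each (src, dst) pair maps a record from the src schema into the dst schema.
mappings = {
    ("A", "B"): lambda rec: {"name": rec["fullname"]},
    ("B", "C"): lambda rec: {"person": rec["name"]},
}

def translate(rec, src, dst):
    """BFS over the mapping graph, composing local mappings along the path."""
    queue = deque([(src, rec)])
    seen = {src}
    while queue:
        node, data = queue.popleft()
        if node == dst:
            return data
        for (a, b), f in mappings.items():
            if a == node and b not in seen:
                seen.add(b)
                queue.append((b, f(data)))
    return None  # no chain of mappings connects src to dst

out = translate({"fullname": "Ada"}, "A", "C")
```

There is no direct A-to-C mapping, yet the record still reaches schema C by composing the two local mappings, which is the point of chaining.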
19.
Conversion to ammonia with Haber-Bosch catalysts can be increased above 95% by selective absorption of ammonia by MgCl2. The maximum conversion depends on reaction and absorption equilibria. At very short times, the measured conversion rate is the same with and without absorption by the MgCl2 salt; the overall rate constants are comparable to those in the literature. At longer times, conversion to ammonia can be over seven times greater with MgCl2 than without. However, the overall rate constants can be over 10 times slower because they are controlled by ammonia diffusion in the solid salt. An approximate, pseudo-steady-state theory consistent with these results provides a strategy for improving the overall rate while keeping the conversion over 90%. For example, the absorption rates might be increased using smaller particles of absorbent on a porous inert absorbent support. The results provide part of the basis for designing small-scale ammonia plants. © 2015 American Institute of Chemical Engineers AIChE J, 61: 1364-1371, 2015
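The effect of absorption on conversion can be illustrated with a toy reversible reaction A ⇌ B in which the product is continuously removed, the way MgCl2 removes NH3. All rate constants and the idealized instantaneous absorption below are assumptions, not the paper's kinetics.

```python
def conversion(k=1.0, K=0.5, absorb=False, dt=1e-3, t_end=20.0):
    """Toy reversible reaction A <-> B with equilibrium constant K.
    With absorb=True, product B is removed instantly (idealized),
    so conversion can exceed the closed-system limit K/(1+K)."""
    a, b = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        r = k * (a - b / K)   # net forward rate
        a -= r * dt
        b += r * dt
        if absorb:
            b = 0.0           # idealized complete absorption of product
    return 1.0 - a            # fractional conversion of A

x_eq = conversion(absorb=False)   # limited by equilibrium, here K/(1+K) = 1/3
x_abs = conversion(absorb=True)   # approaches complete conversion
```

Removing the product keeps the net rate forward, so conversion climbs well past the equilibrium limit, the qualitative effect the paper quantifies for NH3 on MgCl2.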
20.
Behavioral neuroscience has undergone a technology-driven revolution with the emergence of machine-vision and machine-learning technologies. These advances have enabled high-resolution, high-throughput capture and analysis of complex behaviors, and behavioral neuroscience is therefore becoming a data-rich field. While behavioral researchers use advanced computational tools to analyze the resulting datasets, the search for robust and standardized analysis tools is still ongoing. At the same time, the field of genomics has exploded with technologies that enable the generation of massive datasets, and this growth of genomics data has driven the emergence of powerful computational approaches for analyzing them. Here, we discuss the composition of a large behavioral dataset and the differences and similarities between behavioral and genomics data. We then give examples of genomics-related tools that might be of use for behavioral analysis and discuss concepts that might emerge when considering the two fields together.
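One genomics-style analysis that transfers directly is correlating individuals' behavior profiles, treating an individuals-by-behaviors matrix the way expression analysis treats a samples-by-genes matrix. The data and feature names below are invented for illustration.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Rows = individuals, columns = behavior features
# (e.g. locomotion, grooming, rearing), analogous to samples x genes.
data = {
    "mouse1": [5.0, 1.0, 0.2],
    "mouse2": [4.8, 1.1, 0.25],
    "mouse3": [0.5, 6.0, 2.0],
}
sim = pearson(data["mouse1"], data["mouse2"])  # behaviorally similar pair
dis = pearson(data["mouse1"], data["mouse3"])  # behaviorally dissimilar pair
```

The resulting correlation matrix can feed the same hierarchical clustering and heat-map workflows that genomics uses to group co-expressed genes, here grouping individuals with similar behavioral repertoires.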