Paid full text   132 articles
Free   3 articles

By subject:
Electrical Engineering   2 articles
Chemical Industry   20 articles
Metalworking   1 article
Machinery and Instruments   1 article
Building Science   9 articles
Light Industry   2 articles
Hydraulic Engineering   1 article
Radio Electronics   12 articles
General Industrial Technology   20 articles
Metallurgical Industry   8 articles
Automation Technology   59 articles

By publication year:
2021   5 articles
2020   5 articles
2019   4 articles
2018   8 articles
2016   2 articles
2015   4 articles
2014   2 articles
2013   5 articles
2012   9 articles
2011   5 articles
2010   11 articles
2009   10 articles
2008   13 articles
2007   7 articles
2006   5 articles
2005   5 articles
2004   4 articles
2003   1 article
2002   6 articles
1999   2 articles
1998   2 articles
1997   3 articles
1996   5 articles
1995   2 articles
1990   1 article
1986   2 articles
1985   2 articles
1981   2 articles
1980   2 articles
1967   1 article

Sort order: 135 results in total (search time: 750 ms)
51.
We describe several observations regarding the completeness and the complexity of bounded model checking and propose techniques to solve some of the associated computational challenges. We begin by defining the completeness threshold (CT) problem: for every finite model M and LTL property φ, there exists a number CT such that if there is no counterexample to φ in M of length CT or less, then M ⊨ φ. Finding this number, if it is sufficiently small, offers a practical method for making bounded model checking complete. We describe how to compute an overapproximation to CT for a general LTL property using Büchi automata, following the Vardi–Wolper LTL model-checking framework. This computation is based on finding the initialized diameter and the initialized recurrence diameter (the longest loop-free path from an initial state) of the product automaton. We show a method for finding the recurrence diameter with a formula of size O(k log k) (or O(k (log k)²) in practice), where k is the attempted depth, which improves on the previously known method that requires a formula of size O(k²). Based on the value of CT, we prove that the complexity of standard SAT-based BMC is doubly exponential and that, consequently, there is a complexity gap of an exponent between this procedure and standard LTL model checking. We discuss ways to bridge this gap.
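To make the completeness-threshold idea concrete, here is a minimal explicit-state sketch in Python (not the authors' SAT-based encoding): for a simple safety property, the initialized diameter of the model is a valid completeness threshold, so an exhaustive bounded search up to that depth either finds a counterexample or proves the property. The model `trans`, initial-state set `init`, and bad-state predicate `bad` are illustrative assumptions.

from collections import deque

# Minimal explicit-state sketch: `trans` maps a state to its successors,
# `init` is the set of initial states, `bad` marks violating states.
def initialized_diameter(init, trans):
    """Longest shortest path from an initial state (computed by BFS)."""
    dist = {s: 0 for s in init}
    queue = deque(init)
    while queue:
        s = queue.popleft()
        for t in trans.get(s, ()):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return max(dist.values())

def bmc_safety(init, trans, bad):
    """Bounded search up to the initialized diameter; for a simple safety
    property this bound is a completeness threshold, so finding no
    counterexample within it proves the property."""
    ct = initialized_diameter(init, trans)
    frontier, seen = set(init), set(init)
    for depth in range(ct + 1):
        if any(bad(s) for s in frontier):
            return "counterexample of length %d" % depth
        frontier = {t for s in frontier for t in trans.get(s, ())} - seen
        seen |= frontier
    return "property holds (no counterexample up to CT = %d)" % ct

# Hypothetical 4-state model in which state 3 is unreachable from state 0.
trans = {0: [1], 1: [2], 2: [0], 3: [3]}
print(bmc_safety({0}, trans, bad=lambda s: s == 3))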
52.
The standard analysis of reaction networks based on deterministic rate equations fails in confined geometries, commonly encountered in fields such as astrochemistry, thin-film growth and cell biology. In these systems the small reactant population implies anomalous behavior of reaction rates, which can be accounted for only by following the full distribution of reactant numbers.
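A hedged illustration of why small populations break the rate-equation treatment (a Gillespie-style Monte Carlo sketch with invented parameters, not taken from the paper): on a single grain with influx F, desorption W, and pairwise recombination rate a, the deterministic estimate a⟨N⟩²/2 of the recombination rate can differ sharply from the exact stochastic value a⟨N(N-1)⟩/2 once the mean population is of order one.

import random

# All parameter values below are invented for illustration.
F, W, a = 1.0, 0.3, 2.0          # influx, desorption, recombination rates
N, t, t_end = 0, 0.0, 5e4        # population, clock, simulated time span
sum_N = sum_NN = 0.0             # time-integrated moments of N

while t < t_end:
    rates = [F, W * N, a * N * (N - 1) / 2]   # influx, desorption, reaction
    total = sum(rates)
    dt = random.expovariate(total)            # waiting time to the next event
    sum_N += N * dt
    sum_NN += N * (N - 1) * dt
    t += dt
    r = random.random() * total               # pick which event occurred
    if r < rates[0]:
        N += 1                                # an atom lands on the grain
    elif r < rates[0] + rates[1]:
        N -= 1                                # an atom desorbs
    else:
        N -= 2                                # two atoms recombine and leave

mean_N = sum_N / t
print("rate-equation estimate:", a * mean_N ** 2 / 2)
print("stochastic (exact)    :", a * (sum_NN / t) / 2)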
53.
54.
We consider the problem of smoothing real-time streams (such as video streams), where the goal is to reproduce a variable-bandwidth stream remotely while minimizing bandwidth cost, space requirement, and playback delay. We focus on lossy schedules, where data may be dropped due to limited bandwidth or space. We present the following results. First, we determine the optimal tradeoff between buffer space, smoothing delay, and link bandwidth for lossy smoothing schedules: if two of these parameters are given, we can precisely calculate the value of the third that minimizes data loss while avoiding resource wastage. The tradeoff is achieved by a simple generic algorithm that allows some freedom in choosing which data to discard. This algorithm is very easy to implement at both the server and the client, and it enjoys the property that only the server decides which data to discard; the client needs only to reconstruct the stream. In a second set of results we study the case where different parts of the data have different importance, modeled by assigning a real weight to each packet in the stream. For this setting we use competitive analysis, i.e., we compare the weight delivered by on-line algorithms to the weight of an optimal off-line schedule using the same resources. We prove that a natural greedy algorithm is 4-competitive, and we prove a lower bound of 1.23 on the competitive ratio of any deterministic on-line algorithm. Finally, we give experimental results which indicate that smoothing is very effective in practice and that the greedy algorithm performs very well in the weighted case. Received: 21 November 2001. Accepted: 6 November 2003. Published online: 6 February 2004. Research supported in part by the Israel Ministry of Science. An extended abstract of this paper appeared in Proc. 19th ACM Symp. on Principles of Distributed Computing, July 2000.
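As a rough illustration of the weighted setting (a sketch under assumed discrete-time semantics, not the paper's exact model), the greedy policy below buffers arriving packets, forwards at most `bandwidth` of them per step, and, when the buffer overflows, drops the lightest packets first; all packet weights in the example are invented.

import heapq

def greedy_smooth(arrivals, bandwidth, buffer_size):
    """arrivals: one list of packet weights per time step.
    Returns (delivered_weight, dropped_weight)."""
    buffer = []                  # min-heap of buffered packet weights
    delivered = dropped = 0.0
    for step_packets in arrivals:
        for w in step_packets:
            heapq.heappush(buffer, w)
        # Greedy choice: when the buffer overflows, drop the lightest packets.
        while len(buffer) > buffer_size + bandwidth:
            dropped += heapq.heappop(buffer)
        # Forward up to `bandwidth` packets this step, heaviest first.
        for w in heapq.nlargest(bandwidth, buffer):
            buffer.remove(w)
            delivered += w
        heapq.heapify(buffer)
    return delivered, dropped

# Invented arrival pattern: weights of packets arriving at each time step.
print(greedy_smooth([[5, 1, 1], [9, 2], [], [4, 4, 4]], bandwidth=2, buffer_size=2))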
55.
56.
The learning-based automated Assume–Guarantee reasoning paradigm has been applied in recent years to the compositional verification of concurrent systems. Specifically, L* has been used to learn the assumption from strings derived from counterexamples, which are provided by a model checker that attempts to verify the Assume–Guarantee rules. We suggest three optimizations to this paradigm. First, we derive multiple strings for L* from each counterexample, rather than a single one as in previous approaches. This small improvement saves candidate queries and hence model-checking runs. Second, we observe that in existing instances of this paradigm the learning algorithm is only weakly coupled with the teacher: the learner completely ignores the internal structure of the system and specification being verified, even though it is already available to the teacher. We suggest an optimization that uses this information to avoid many unnecessary membership queries (it reduces the number of such queries by more than an order of magnitude). Finally, we develop a method for minimizing the alphabet used by the assumption, which reduces the size of the assumption and the number of queries required to construct it. We present these three optimizations in the context of verifying trace containment for concurrent systems composed of finite state machines. We have implemented our approach in the ComFoRT tool and experimented with real-life examples. Our results exhibit an average speedup of 4 to 11 times, depending on the Assume–Guarantee rule used and the set of activated optimizations. This research was supported by the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Pittsburgh.
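A hedged sketch of the first optimization (the exact projection and set of derived strings may differ from the paper's): from one counterexample trace, return every non-empty prefix of its projection onto the assumption alphabet, so that a single model-checking run supplies several strings to L*. The action names below are hypothetical.

# Hypothetical action names; `assumption_alphabet` is the interface alphabet
# over which the assumption is learned.
def strings_from_counterexample(trace, assumption_alphabet):
    """Project the counterexample onto the assumption alphabet and return
    all of its non-empty prefixes, longest first."""
    projected = tuple(a for a in trace if a in assumption_alphabet)
    return [projected[:i] for i in range(len(projected), 0, -1)]

cex = ["send", "internal_1", "ack", "internal_2", "send"]
print(strings_from_counterexample(cex, {"send", "ack"}))
# -> [('send', 'ack', 'send'), ('send', 'ack'), ('send',)]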
57.
Crambin, a small hydrophobic protein (4.7 kDa and 46 residues), has been successfully expressed in Escherichia coli from an artificial, synthetic gene. Several expression systems were investigated. Ultimately, crambin was successfully expressed as a fusion protein with the maltose binding protein, which was purified by affinity chromatography. Crambin expressed as a C-terminal domain was then cleaved from the fusion protein with Factor Xa protease and purified. Circular dichroism spectroscopy and amino acid analysis suggested that the purified material was identical to crambin isolated from seed. For positive identification the protein was crystallized from an ethanol–water solution, by a novel method involving the inclusion of phospholipids in the crystallization buffer, and then subjected to crystallographic analysis. Diffraction data were collected at the Brookhaven synchrotron (beamline X12C) to a resolution of 1.32 Å at 150 K. The structure, refined to an R value of 9.6%, confirmed that the cloned protein was crambin. The availability of cloned crambin will allow site-specific mutagenesis studies to be performed on the protein known to the highest resolution.
58.
59.
Melnik, Ofer. Machine Learning, 2002, 48(1-3): 321-351
In this paper we present a method to extract qualitative information from any classification model that uses decision regions to generalize (e.g., feed-forward neural nets or SVMs). The method's complexity is independent of the dimensionality of the input data or model, making it computationally feasible for the analysis of even very high-dimensional models. The qualitative information extracted by the method can be used directly to analyze the classification strategies employed by a model, and also to compare strategies across different model types.
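In that spirit, here is a hedged sketch of a dimension-independent, black-box probe of decision regions (not necessarily the paper's exact procedure): it tests whether two inputs assigned the same class are connected by a straight segment that never leaves that class's region. The analysis only calls the model's prediction interface, so it is agnostic to the model type; `model.predict` (scikit-learn-style) and the sample count are assumptions.

import numpy as np

def same_region_on_segment(model, x1, x2, n_samples=50):
    """True if every point sampled on the segment from x1 to x2 is assigned
    the same class as x1, i.e. the straight path stays in one decision region."""
    target = model.predict(x1[None, :])[0]
    ts = np.linspace(0.0, 1.0, n_samples)
    points = np.array([(1 - t) * x1 + t * x2 for t in ts])
    return bool(np.all(model.predict(points) == target))

# Usage sketch with any classifier exposing a scikit-learn-style predict():
# connected = same_region_on_segment(clf, X[0], X[7])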
60.