Sorted by relevance: 137 query results found; search time 0 ms. Entries 51–60 are shown below.
51.
Sustainable rangeland stewardship calls for synoptic estimates of rangeland biomass quantity (kg dry matter ha−1) and quality [carbon:nitrogen (C:N) ratio]. These data are needed to support estimates of rangeland crude protein in forage, either by percent (CPc) or by mass (CPm). Biomass derived from remote sensing data is often compromised by the presence of both photosynthetically active (PV) and non-photosynthetically active (NPV) vegetation. Here, we explicitly quantify PV and NPV biomass using HyMap hyperspectral imagery. Biomass quality, defined as plant C:N ratio, was also estimated using a previously published algorithm. These independent algorithms for forage quantity and quality (both PV and NPV) were evaluated in two northern mixed-grass prairie ecoregions, one in the Northwestern Glaciated Plains (NGGP) and one in the Northwestern Great Plains (NGP). Total biomass (kg ha−1) and C:N ratios were mapped with 18% and 8% relative error, respectively. Outputs from both models were combined to quantify crude protein (kg ha−1) on a pasture scale. Results suggest synoptic maps of rangeland vegetation mass (both PV and NPV) and quality may be derived from hyperspectral aerial imagery with greater than 80% accuracy.
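The combination step can be illustrated with a back-of-the-envelope calculation. The sketch below assumes dry matter is roughly 45% carbon and uses the conventional nitrogen-to-protein factor of 6.25; both constants and the function name are illustrative, not taken from the study.

```python
def crude_protein_kg_ha(biomass_kg_ha, cn_ratio, carbon_fraction=0.45):
    """Crude protein mass (kg ha^-1) from total biomass and C:N ratio.

    Assumes dry matter is ~45% carbon (illustrative) and the
    conventional nitrogen-to-protein conversion factor of 6.25.
    """
    nitrogen_kg_ha = carbon_fraction * biomass_kg_ha / cn_ratio
    return 6.25 * nitrogen_kg_ha

# e.g. 2000 kg ha^-1 of biomass at C:N = 45 -> 125 kg ha^-1 crude protein
cp = crude_protein_kg_ha(2000, 45)
```

Combining a biomass map with a C:N map in this fashion yields a per-pixel crude-protein map at pasture scale.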
52.
53.
We describe several observations regarding the completeness and the complexity of bounded model checking and propose techniques to solve some of the associated computational challenges. We begin by defining the completeness threshold (CT) problem: for every finite model M and an LTL property φ, there exists a number CT such that if there is no counterexample to φ in M of length CT or less, then M ⊨ φ. Finding this number, if it is sufficiently small, offers a practical method for making bounded model checking complete. We describe how to compute an overapproximation to CT for a general LTL property using Büchi automata, following the Vardi–Wolper LTL model checking framework. This computation is based on finding the initialized diameter and the initialized recurrence diameter (the longest loop-free path from an initial state) of the product automaton. We show a method for finding the recurrence diameter with a formula of size O(k log k) (or O(k (log k)²) in practice), where k is the attempted depth, an improvement over the previously known method, which requires a formula of size O(k²). Based on the value of CT, we prove that the complexity of standard SAT-based BMC is doubly exponential and that, consequently, there is a complexity gap of one exponent between this procedure and standard LTL model checking. We discuss ways to bridge this gap.
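For safety (reachability) properties, a valid completeness threshold is the initialized diameter: if no counterexample exists within that many steps, none exists at all. The paper works with SAT encodings of the unrolled transition relation; the explicit-state sketch below, with illustrative names, shows only the bounded-search idea:

```python
def bmc_safety(init, trans, bad, k):
    # Explicit-state stand-in for SAT-based BMC: search for a path of
    # length <= k from an initial state to a state satisfying bad().
    parent = {s: None for s in init}
    frontier = list(init)

    def path(s):
        p = []
        while s is not None:
            p.append(s)
            s = parent[s]
        return p[::-1]

    for depth in range(k + 1):
        for s in frontier:
            if bad(s):
                return path(s)       # counterexample of length <= k
        if depth == k:
            break
        nxt = []
        for s in frontier:
            for t in trans(s):
                if t not in parent:  # visit each state once (BFS)
                    parent[t] = s
                    nxt.append(t)
        frontier = nxt
    return None                      # no counterexample up to depth k

# Toy system: a 3-bit counter 0 -> 1 -> ... -> 7 -> 0; "bad" state is 5.
trans = lambda s: [(s + 1) % 8]
bad = lambda s: s == 5
```

Since the initialized diameter of this toy system is 7, a `None` result at k = 7 proves the safety property outright; that is the sense in which CT makes bounded model checking complete.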
54.
The standard analysis of reaction networks based on deterministic rate equations fails in confined geometries, commonly encountered in fields such as astrochemistry, thin-film growth and cell biology. In these systems the small reactant population implies anomalous behavior of reaction rates, which can be accounted for only by following the full distribution of reactant numbers.
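One standard way to follow the full distribution is stochastic simulation of individual reaction events. The sketch below is a minimal Gillespie-style trajectory for a single annihilation reaction A + A → 0 with a small copy number; the reaction, rate form, and names are illustrative, not taken from the abstract.

```python
import random

def gillespie_annihilation(n0, rate, t_max, seed=0):
    # One stochastic trajectory of A + A -> 0. With n copies present,
    # the propensity is rate * n * (n - 1) / 2; each event removes two A's.
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n >= 2:
        propensity = rate * n * (n - 1) / 2.0
        t += rng.expovariate(propensity)  # waiting time to the next event
        if t > t_max:
            break
        n -= 2
    return n  # copy number at t_max (or once the reaction halts)
```

Averaging many such trajectories recovers the full copy-number distribution, whose mean at small n deviates from the deterministic rate-equation prediction dn/dt = −rate·n².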
55.
56.
We consider the problem of smoothing real-time streams (such as video streams), where the goal is to reproduce a variable-bandwidth stream remotely, while minimizing bandwidth cost, space requirement, and playback delay. We focus on lossy schedules, where data may be dropped due to limited bandwidth or space. We present the following results. First, we determine the optimal tradeoff between buffer space, smoothing delay, and link bandwidth for lossy smoothing schedules. Specifically, this means that if two of these parameters are given, we can precisely calculate the value for the third which minimizes data loss while avoiding resource wastage. The tradeoff is accomplished by a simple generic algorithm that allows some freedom in choosing which data to discard. This algorithm is very easy to implement both at the server and at the client, and it enjoys the nice property that only the server decides which data to discard, while the client needs only to reconstruct the stream. In a second set of results we study the case where different parts of the data have different importance, modeled by assigning a real weight to each packet in the stream. For this setting we use competitive analysis, i.e., we compare the weight delivered by on-line algorithms to the weight of an optimal off-line schedule using the same resources. We prove that a natural greedy algorithm is 4-competitive. We also prove a lower bound of 1.23 on the competitive ratio of any deterministic on-line algorithm. Finally, we give a few experimental results which seem to indicate that smoothing is very effective in practice, and that the greedy algorithm performs very well in the weighted case. Received: 21 November 2001, Accepted: 6 November 2003, Published online: 6 February 2004. Research supported in part by the Israel Ministry of Science. An extended abstract of this paper appeared in Proc. 19th ACM Symp. on Principles of Distributed Computing, July 2000.
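The weighted setting can be illustrated with a small greedy sketch: under a buffer limit, keep the heaviest packets and send the heaviest ones each step. This is an illustrative stand-in for the paper's greedy algorithm (shown there to be 4-competitive), not its exact pseudocode; the names and the batch-per-step model are assumptions.

```python
import heapq

def greedy_smooth(arrivals, buf_size, bandwidth):
    # arrivals: per-time-step lists of packet weights.
    # Each step: buffer new packets, evicting the lightest on overflow,
    # then send the heaviest packets the link bandwidth allows.
    buf = []          # min-heap of buffered packet weights
    delivered = 0.0   # total weight successfully sent
    for batch in arrivals:
        for w in batch:
            heapq.heappush(buf, w)
            if len(buf) > buf_size:
                heapq.heappop(buf)        # drop the lightest packet
        for w in heapq.nlargest(bandwidth, buf):
            buf.remove(w)
            delivered += w
        heapq.heapify(buf)                # restore heap order
    return delivered
```

For example, with buffer size 2 and bandwidth 1, arrivals `[[3, 1, 2], [5]]` deliver weights 3 and 5; the off-line comparison in the paper bounds how far such greedy choices can fall below the optimum.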
57.
Projects are value creation mechanisms for organizations. In this paper, we build on service-dominant logic theory to theorize how value is perceived and co-created by service providers and clients in professional service projects. From two studies, we found that for service providers to create their value, particularly non-monetary value (e.g., enhanced reputation), client values (e.g., solving a business problem) must first be generated. The results further highlight the importance of reciprocal interactions between service providers and their clients in co-creating value for both parties. Service providers' professional knowledge and competence and their clients' levels of professional knowledge and motivation to interact are critical to enable effective interactions. However, the influence of service providers' professional ethics and clients' trust in professionals on project value co-creation is more complex than theoretically predicted. This paper advances the project value creation literature by providing a more holistic view of what value means for different stakeholders, how it is created, and by whom.
58.
The learning-based automated Assume–Guarantee reasoning paradigm has been applied in the last few years for the compositional verification of concurrent systems. Specifically, L* has been used for learning the assumption, based on strings derived from counterexamples, which are given to it by a model-checker that attempts to verify the Assume–Guarantee rules. We suggest three optimizations to this paradigm. First, we derive from each counterexample multiple strings for L*, rather than a single one as in previous approaches. This small improvement saves candidate queries and hence model-checking runs. Second, we observe that in existing instances of this paradigm, the learning algorithm is coupled weakly with the teacher. Thus, the learner completely ignores the details of the internal structure of the system and specification being verified, which are already available to the teacher. We suggest an optimization that uses this information in order to avoid many unnecessary membership queries (it reduces the number of such queries by more than an order of magnitude). Finally, we develop a method for minimizing the alphabet used by the assumption, which reduces the size of the assumption and the number of queries required to construct it. We present these three optimizations in the context of verifying trace containment for concurrent systems composed of finite state machines. We have implemented our approach in the ComFoRT tool, and experimented with real-life examples. Our results exhibit an average speedup of between 4 and 11 times, depending on the Assume–Guarantee rule used and the set of activated optimizations. This research was supported by the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Pittsburgh.
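The first optimization, deriving multiple strings from a single counterexample, can be sketched as follows. Whether the paper uses suffixes or a different decomposition is not stated in this abstract; the suffix-based projection below is purely illustrative.

```python
def counterexample_strings(cex, assumption_alphabet):
    # Project the counterexample trace onto the assumption alphabet,
    # then hand every suffix to L* as a separate string, rather than
    # a single string per counterexample (illustrative decomposition).
    projected = [a for a in cex if a in assumption_alphabet]
    return [tuple(projected[i:]) for i in range(len(projected))]
```

Feeding several related strings per counterexample gives the learner more information per model-checking run, which is the source of the saved candidate queries.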
59.
Crambin, a small hydrophobic protein (4.7 kDa and 46 residues), has been successfully expressed in Escherichia coli from an artificial, synthetic gene. Several expression systems were investigated. Ultimately, crambin was successfully expressed as a fusion protein with the maltose binding protein, which was purified by affinity chromatography. Crambin expressed as a C-terminal domain was then cleaved from the fusion protein with Factor Xa protease and purified. Circular dichroism spectroscopy and amino acid analysis suggested that the purified material was identical to crambin isolated from seed. For positive identification the protein was crystallized from an ethanol–water solution, by a novel method involving the inclusion of phospholipids in the crystallization buffer, and then subjected to crystallographic analysis. Diffraction data were collected at the Brookhaven synchrotron (beamline X12C) to a resolution of 1.32 Å at 150 K. The structure, refined to an R value of 9.6%, confirmed that the cloned protein was crambin. The availability of cloned crambin will allow site-specific mutagenesis studies to be performed on the protein known to the highest resolution.
60.