11.
The weak form of the Efficient Market Hypothesis (EMH) states that the current market price fully reflects the information contained in past prices, ruling out prediction based on price data alone. No recent test of stock-return time series rejects this weak-form hypothesis. This research offers another test of the weak form of the EMH that leads to different conclusions for some time series. The stochastic complexity of a time series is a measure of the number of bits needed to represent and reproduce the information in the series. In an efficient market, compression of the time series is not possible, because there are no patterns and the stochastic complexity is high. In this research, Rissanen's context-tree algorithm is used to identify recurring patterns in the data and to exploit them for compression. The weak form of the EMH is tested for 13 international stock indices and for all the stocks that comprise the Tel-Aviv 25 index (TA25), using sliding windows of 50, 75, and 100 consecutive daily returns. Statistically significant compression is detected in ten of the international stock index series. In the aggregate, 60% to 84% of the TA25 stocks tested demonstrate compressibility beyond randomness, indicating potential market inefficiency.
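A minimal sketch of such a compression-based test is given below. It is illustrative only: zlib stands in for Rissanen's context-tree coder, the returns are simulated, and the shuffled-surrogate comparison is one simple way to judge significance, not the paper's exact procedure.

```python
# Hedged sketch of a compression-based weak-form EMH test.
# zlib is a generic stand-in for the context-tree coder used in the paper;
# its framing overhead makes the paper's 50-100 day windows too short for
# this stand-in, so a longer simulated window is used for the demo.
import zlib
import random

def compressed_bits(symbols: str) -> int:
    """Bits used by a generic dictionary coder for a symbol string."""
    return 8 * len(zlib.compress(symbols.encode(), level=9))

def compressibility_test(returns, n_shuffles=200, seed=0):
    """Fraction of shuffled surrogates compressing at least as well as the
    real sequence; a small value suggests structure beyond randomness."""
    rng = random.Random(seed)
    symbols = ''.join('u' if r > 0 else 'd' for r in returns)  # binarize
    real = compressed_bits(symbols)
    better = 0
    for _ in range(n_shuffles):
        surrogate = list(symbols)
        rng.shuffle(surrogate)              # destroys temporal patterns
        if compressed_bits(''.join(surrogate)) <= real:
            better += 1
    return better / n_shuffles

# Example on simulated i.i.d. returns (an efficient "market"):
rng = random.Random(42)
window = [rng.gauss(0, 0.01) for _ in range(500)]
print(compressibility_test(window))  # large fraction expected: no structure
```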
12.
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game-theoretic context. We define a matrix game in which a searcher, looking for a hider, inspects each of a fixed number of discrete locations at most once; the hider can escape with a probability that depends on its location. The value of the game is the overall probability of capture after k looks. The optimal search and hide strategies are described. If the searcher looks into any location only once, an optimal hider chooses its hiding place so as to make all locations equally attractive. This strategy remains optimal as long as the number of looks is below an easily calculated threshold; above this threshold, the optimal position for the hider is the one where it has the highest probability of escaping once spotted.
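For the single-look case, the hider's equalizing strategy can be written down directly. The sketch below assumes that a hider spotted at location i is captured with probability 1 - escape[i]; the escape probabilities are made-up illustrative values, not the paper's data.

```python
# Hedged sketch: the hider's "equally attractive" mixed strategy for a
# single look. Capture at location i, given the hider is there and is
# looked at, is assumed to occur with probability 1 - escape[i].
escape = [0.1, 0.5, 0.7]            # P(escape | spotted), per location

# Equalize h_i * (1 - escape_i): weight each location inversely to its
# capture probability, so every location offers the searcher the same
# expected payoff.
weights = [1.0 / (1.0 - e) for e in escape]
total = sum(weights)
hide = [w / total for w in weights]

value = hide[0] * (1.0 - escape[0])  # searcher's capture prob, any look
print(hide)   # ~[0.172, 0.310, 0.517]
print(value)  # ~0.155, identical for every location by construction
```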
13.
The determination of the time averages of continuous functions or discrete time sequences is important for various problems in physics and engineering. The generalized final-value theorems of the Laplace and z-transforms, which apply to functions and sequences that have no limit at infinity but do have a well-defined average, can be very helpful in this determination. In the present contribution, we complete the proofs of these theorems and extend them to more general time functions and sequences. Besides the formal proofs, we give some simple examples and heuristic, pedagogical comments on the physical nature of the limiting processes that define the averaging. Copyright © 2012 John Wiley & Sons, Ltd.
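As a hedged illustration (the precise hypotheses on the function and sequence are those established in the paper, not reproduced here), the generalized theorems replace the limit at infinity with a time average:

```latex
% Statement sketch of the generalized final-value theorems.
% Continuous time: if the average of f exists, then
\[
  \lim_{T\to\infty} \frac{1}{T}\int_{0}^{T} f(t)\,dt
  \;=\; \lim_{s\to 0^{+}} s\,F(s),
  \qquad F(s)=\int_{0}^{\infty} f(t)\,e^{-st}\,dt .
\]
% Discrete time: for a sequence x[n] with z-transform X(z),
\[
  \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} x[n]
  \;=\; \lim_{z\to 1^{+}} (z-1)\,X(z).
\]
```

For example, \(f(t)=\sin t\) has no limit at infinity, yet \(sF(s)=s/(s^{2}+1)\to 0\) as \(s\to 0^{+}\), matching its zero average.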
14.
Web applications can be classified as hybrids between hypermedia and information systems. They have a relatively simple distributed architecture from the user's viewpoint, but a complex dynamic architecture from the designer's viewpoint. They must support operation by an unlimited number of users with heterogeneous skills, address security and privacy concerns, access heterogeneous, up-to-date information sources, and exhibit dynamic behaviors that involve processes such as code transfer. Common system development methods can model some of these aspects, but none of them is sufficient to specify the large spectrum of Web application concepts and requirements. This paper introduces OPM/Web, an extension of the Object-Process Methodology (OPM) that satisfies the functional, structural, and behavioral requirements of Web-based information systems. The main extensions in OPM/Web are: adding link properties to express requirements, such as those related to encryption; extending the zooming and unfolding facilities to increase modularity; cleanly separating declarations and instances of code to model code transfer; and adding global data-integrity and control constraints to express dependence or temporal relations among (physically) separate modules. We present a case study that helps evaluate OPM/Web and compare it to an extension of the Unified Modeling Language (UML) for the Web application domain.
15.
The problem of defining vector-space operations on fuzzy and probability vectors is discussed. It is shown that such a definition is equivalent to choosing a one-to-one mapping of the unit interval onto the real line. Although such a mapping cannot be continuous, it is suggested that, under certain approximations, a continuous mapping can be chosen. A characterization of some useful mappings, with applications to image processing, is also given.
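A minimal sketch of the idea follows, assuming the logit map on the open unit interval is an acceptable choice of mapping; the induced operations below are illustrative and not the paper's specific construction.

```python
# Illustrative sketch: induce vector-space-like operations on (0,1)
# through a bijection phi: (0,1) -> R. The logit map is one natural
# (assumed) choice; the paper characterizes a family of such mappings.
import math

def phi(x: float) -> float:
    """Logit: a one-to-one map of the open unit interval onto R."""
    return math.log(x / (1.0 - x))

def phi_inv(y: float) -> float:
    """Inverse logit (the logistic function)."""
    return 1.0 / (1.0 + math.exp(-y))

def add(a: float, b: float) -> float:
    """'Addition' of fuzzy membership values, pulled back through phi."""
    return phi_inv(phi(a) + phi(b))

def scale(c: float, a: float) -> float:
    """'Scalar multiplication' of a fuzzy value, pulled back through phi."""
    return phi_inv(c * phi(a))

print(add(0.9, 0.9))    # ~0.988: combining strong memberships stays in (0,1)
print(scale(0.5, 0.9))  # 0.75: halving in the transformed space
```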
16.
17.
A mechanized verification environment, built as theories over the deductive theorem prover PVS, is presented; it allows taking advantage of the convenient computations method. This method reduces the conceptual difficulty of proving a given property for all possible computations of a system by separating two concerns: (1) proving that special convenient computations satisfy the property, and (2) proving that every computation is related to a convenient one by a relation that preserves the property. The approach is especially appropriate for applications in which the first concern is trivial once the second has been shown, e.g., where the specification itself is that every computation reduces to a convenient one. Two examples are the serializability of transactions in distributed databases and the sequential consistency of distributed shared memories. To reduce repeated effort, a clear separation is made between infrastructural theories, supplied to users as a PVS proof-environment library, and the specification and proof of particular examples. The provided infrastructure formally defines the method in its most general form. It also defines a computation model and a reduction relation: the equivalence of computations that differ only in the order of finitely many independent operations. One way to prove that this relation holds between every computation and some convenient one involves defining a measure function from computations into a well-founded set. Two default measures, applicable in many cases, are also defined in the infrastructure, along with useful lemmas that assist in their use. We show how the proof environment can be used through a step-by-step explanation of an application example.
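The well-founded-measure idea can be illustrated outside PVS. The sketch below (Python rather than PVS, and purely illustrative: the operation model, independence test, and notion of "convenient" are all assumptions) reduces a computation by swapping adjacent independent operations, with the number of inversions as a measure that strictly decreases, which guarantees termination.

```python
# Illustrative sketch (Python, not PVS): reduce a computation to a
# "convenient" one by swapping adjacent independent operations.
# The measure is the inversion count; each swap decreases it by one,
# so the measure maps into a well-founded set and reduction terminates.
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    pid: int       # issuing process
    var: str       # variable touched

def independent(a: Op, b: Op) -> bool:
    """Assumed independence: different processes and different variables."""
    return a.pid != b.pid and a.var != b.var

def inversions(comp) -> int:
    """Measure: pairs of operations appearing out of process-id order."""
    return sum(1 for i in range(len(comp))
                 for j in range(i + 1, len(comp))
                 if comp[i].pid > comp[j].pid)

def reduce_to_convenient(comp):
    """Bubble independent ops into pid order; 'convenient' = pid-sorted."""
    comp = list(comp)
    changed = True
    while changed:
        changed = False
        for i in range(len(comp) - 1):
            a, b = comp[i], comp[i + 1]
            if a.pid > b.pid and independent(a, b):
                comp[i], comp[i + 1] = b, a   # measure drops by exactly 1
                changed = True
    return comp

run = [Op(2, "x"), Op(1, "y"), Op(3, "z"), Op(1, "w")]
print(inversions(run))            # 3
print(reduce_to_convenient(run))  # pid order 1, 1, 2, 3
```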
18.
19.
Debugging is one of the most time-consuming activities in program design. Work on automatic debugging has received a great deal of attention, and a number of symposia are dedicated to the field. Automatic debugging is usually invoked when a test fails in one situation but succeeds in another; for example, a test fails in one version of the program (or under one scheduler) but succeeds in another. Automatic debugging searches for the smallest difference that causes the failure, which is very useful for identifying and fixing the root cause of the bug. A new testing method instruments concurrent programs with schedule-modifying instructions to reveal concurrency bugs; it is designed to increase the probability that concurrency bugs (such as races and deadlocks) appear. This paper discusses integrating this new testing technology with automatic debugging: instead of merely showing that a bug exists, we can pinpoint its location by finding the minimal set of instrumentations that reveals the bug. In addition to explaining a methodology for this integration, we present an AspectJ-based implementation. We discuss the implementation in detail, as it demonstrates both the advantage of adaptable open-source tools and how our specific change can be reused by other testing tools.
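The search for a minimal revealing set of instrumentations is in the spirit of delta debugging. A minimal sketch follows; it assumes a boolean oracle reveals_bug(subset) that runs the test with only the given instrumentation points enabled, and the greedy one-at-a-time minimization shown is a simplification of the ddmin algorithm, not the paper's implementation.

```python
# Hedged sketch of minimizing the instrumentation set that reveals a bug.
# `reveals_bug` is an assumed oracle: it runs the test with exactly the
# given instrumentation points enabled and reports whether the bug shows.
def minimize(points, reveals_bug):
    """Greedy 1-minimal subset: dropping any single point hides the bug."""
    assert reveals_bug(points), "the full set must reveal the bug"
    current = list(points)
    progress = True
    while progress:
        progress = False
        for p in list(current):
            candidate = [q for q in current if q != p]
            if reveals_bug(candidate):     # p was not needed
                current = candidate
                progress = True
    return current

# Toy oracle: the bug needs schedule noise at both sync points 3 and 7.
def toy_oracle(subset):
    return 3 in subset and 7 in subset

print(minimize(list(range(10)), toy_oracle))  # -> [3, 7]
```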
20.
We define the convexity rank of a set of points to be the proportion of mutually visible pairs of points out of the total number of pairs. Based on this definition of weak convexity, we introduce a spectral method that decomposes a given shape into weakly convex regions. The decomposition is obtained without explicitly measuring the convexity rank; the method merely amounts to a spectral clustering of a matrix representing the all-pairs line of sight. Our method can be applied directly to an oriented point cloud and requires neither topological information nor explicit concavity or convexity measures. We demonstrate the efficiency of our algorithm on a large number of examples and compare the results qualitatively with competing approaches.
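A minimal sketch of the pipeline is given below. The toy point cloud, the segment-blocking predicate, and the number of clusters are all illustrative assumptions; the paper's visibility construction on oriented point clouds may differ.

```python
# Hedged sketch: decompose a shape into weakly convex parts by spectral
# clustering of an all-pairs visibility (line-of-sight) matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

def visibility_matrix(points, blocked):
    """V[i, j] = 1 if the segment between points i and j is unobstructed.

    `blocked(p, q)` is an assumed geometry predicate for the shape.
    """
    n = len(points)
    V = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if blocked(points[i], points[j]):
                V[i, j] = V[j, i] = 0.0
    return V

def decompose(points, blocked, n_parts=2):
    """Cluster points so that each part is (approximately) weakly convex."""
    V = visibility_matrix(points, blocked)   # used as a precomputed affinity
    model = SpectralClustering(n_clusters=n_parts, affinity="precomputed")
    return model.fit_predict(V)              # one region label per point

# Toy example: two clumps on the x-axis; a "wall" at x = 0 blocks sight.
pts = np.array([[-2.0, 0.0], [-1.5, 0.3], [-1.0, -0.2],
                [1.0, 0.1], [1.5, -0.3], [2.0, 0.2]])
wall = lambda p, q: (p[0] < 0) != (q[0] < 0)   # crossing x = 0 is blocked
print(decompose(pts, wall))  # e.g. [0 0 0 1 1 1]
```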