Search results: 15 articles in total.
1.
Wong and Poon [1] showed that Chow and Liu's tree dependence approximation can be derived by minimizing an upper bound of the Bayes error rate. Wong and Poon's result was obtained by expanding the conditional entropy H(w|X). We derive the correct expansion of H(w|X) and present its implications.
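As background to the abstract above, the sketch below illustrates the Chow-Liu construction it refers to, not the paper's entropy expansion: pairwise mutual information is estimated from data and a maximum-weight spanning tree is extracted. The toy data and function names are invented, and numpy/networkx are assumed to be available.

```python
# Generic sketch of the Chow-Liu tree dependence approximation: pairwise
# mutual information between discrete variables serves as edge weights,
# and the dependence tree is a maximum-weight spanning tree over them.
from itertools import combinations

import networkx as nx
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information of two discrete sequences."""
    n = len(x)
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    counts = {}
    for a, b in zip(x, y):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in counts.items())

def chow_liu_tree(data):
    """data: (n_samples, n_vars) array of discrete values -> tree edges."""
    g = nx.Graph()
    for i, j in combinations(range(data.shape[1]), 2):
        g.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return sorted(nx.maximum_spanning_tree(g).edges())

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = x0 ^ (rng.random(500) < 0.1).astype(int)   # x1 is a noisy copy of x0
x2 = rng.integers(0, 2, 500)                    # independent variable
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # expect edge (0, 1) in the tree
```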
2.
This paper presents the concept and formulation of a signed real measure of regular languages for analysis of discrete-event supervisory control systems. The measure is constructed based upon the principles of language theory and real analysis for quantitative evaluation and comparison of the controlled behaviour of discrete-event automata. The marked (i.e. accepted) states of finite-state automata are classified in different categories such that the event strings terminating at good and bad marked states have positive and negative measures, respectively. In this setting, a controlled language attempts to disable as many bad strings as possible and as few good strings as possible. Different supervisors may achieve this goal in different ways and generate a partially ordered set of controlled languages. The language measure creates a total ordering on the performance of the controlled languages, which provides a precise quantitative comparison of the controlled plant behaviour under different supervisors. Total variation of the language measure serves as a metric for the space of sublanguages of the regular language.
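To make the construction concrete, here is a small numerical sketch under one common formulation of such a language measure: a substochastic event-cost matrix Π over the automaton's states and a signed characteristic vector χ over its marked states, with the measure vector obtained as μ = (I − Π)⁻¹χ. The three-state automaton and its numbers are invented for illustration, and the paper's exact construction may differ.

```python
# Hedged sketch: signed real measure of a regular language on a toy
# 3-state automaton.  chi assigns +/- weights to good/bad marked states,
# Pi is the substochastic event-cost matrix, and the measure vector is
# mu = (I - Pi)^{-1} chi, with one entry per state.
import numpy as np

Pi = np.array([            # Pi[i, j]: total cost of events from state i to j
    [0.3, 0.4, 0.2],
    [0.1, 0.5, 0.3],
    [0.2, 0.2, 0.4],
])                         # each row sums to < 1, so (I - Pi) is invertible

chi = np.array([0.0, 1.0, -1.0])   # state 1: good marked state, state 2: bad

mu = np.linalg.solve(np.eye(3) - Pi, chi)
print("measure vector:", mu)       # positive entries favour, negative penalise
```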
3.
4.
An intelligent multi-user mechanism has been prototyped at the Information System Collaboratory of the Pennsylvania State University, which is capable of resolving global queries with differing and overlapping information needs, spatial scalability and temporal assumptions. The sources of information for this prototype are mechanical damage monitoring sensors embedded in equipment at plant sites or on board ships and aircraft, archived historical and diagnostic databases such as the NALCOMIS (NAVMASSO document J-004 EM-001C, 1995) logistics and maintenance databases at depots, interactive electronic technical manuals stored in databases, dynamic models of damage, and models of operational performance. The concept of operation includes mobile access to this information by equipment maintainers on board ships, aircraft and other mobile platforms. Real-time interoperation of these system components and databases, under dynamic equipment operating conditions of thermo-mechanical and environmental stress, requires complex interactions of internal representations of sensor data, performance requirements, resources and equipment models, with rich semantics. To support such interactions, following the work of Bright, Hurson and Pakzad (Transactions on Database Systems, Vol. 19, No. 2, pp. 212–253, 1994), local schema terms of available data sources are organized as the leaf nodes in a semantic network of metadata. The physical nodes of the network are partitioned into a top-down multi-level search control structure of increasing precision and decreasing semantic aggregation. Each physical node supports search through all lower layers of metadata in connected tree configurations. The resulting multilayered semantic network is modeled as a Thesaurus of terms T and relationships R. A relationship in R may be crisp or fuzzy. The DTIC (Defense Technical Information Center) thesaurus for equipment maintenance was used as a starting point in this work. It was further enhanced with application-specific terms and endowed with a distance function. This distance function is used to formulate user-adaptable graphical user interfaces (GUIs) for making quality-of-service tradeoffs in the resolution of global queries. Step-by-step construction of the thesaurus as a multilevel metadata network, its scalability, dynamic adaptation through usage, and tolerance of semantic imprecision in query resolution are discussed in this paper. Furthermore, performance metrology for evaluating quality of service in global query resolution is also developed (Phoha, in Proceedings of the NIST Workshop on Advancing Measurements and Testing for Information Technology, Gaithersburg, MD, Oct. 1998). This work was funded by DARPA for the past four years under grant DE-FC36-94G010064, for establishing a National Information Infrastructure Testbed for Electronic Commerce in equipment health monitoring, failure diagnosis and prognosis services.
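As a rough illustration of the thesaurus distance function described above (all terms, relationship strengths, and the crisp/fuzzy weighting below are invented), one way such a distance can be realized is as a shortest path over a weighted term graph, with fuzzier relationships mapped to longer edges:

```python
# Illustrative sketch: a tiny thesaurus graph with crisp (strength 1.0) and
# fuzzy (strength < 1.0 -> larger distance) relationships, and a
# shortest-path distance between terms used during query resolution.
import heapq

edges = {
    ("vibration", "bearing fault"): 0.8,
    ("bearing fault", "spall"): 1.0,
    ("vibration", "acoustic emission"): 0.5,
    ("acoustic emission", "crack growth"): 0.9,
}

graph = {}
for (a, b), strength in edges.items():
    d = 1.0 / strength                     # weaker relationship -> larger distance
    graph.setdefault(a, []).append((b, d))
    graph.setdefault(b, []).append((a, d))

def semantic_distance(source, target):
    """Dijkstra shortest path over the thesaurus graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

print(semantic_distance("vibration", "crack growth"))
```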
5.
We propose a Monte Carlo approach to attain sufficient training data, a splitting method to improve effectiveness, and a system composed of parallel decision trees (DTs) to authenticate users based on keystroke patterns. For each user, approximately 19 times as much simulated data was generated to complement the 387 vectors of raw data. The training set, including raw and simulated data, is split into four subsets; for each subset, wavelet transforms are performed to obtain a total of eight training subsets for each user. Eight DTs are thus trained using the eight subsets. A parallel DT is constructed for each user; it contains all eight DTs and authenticates the user if at least three of them do so, otherwise it rejects the user. Training and testing data were collected from 43 users, each of whom typed the same 37-character string nine consecutive times to provide training data. The users typed the same string at various times over a period from November through December 2002 to provide test data. The average false reject rate was 9.62% and the average false accept rate was 0.88%.
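A hedged sketch of the voting scheme described in the abstract, using scikit-learn decision trees. The keystroke vectors below are synthetic stand-ins, the Monte Carlo augmentation is reduced to resampling with added noise, and the wavelet-based splitting is replaced by random subsetting, so this only mirrors the overall structure: eight trees, accept on at least three votes.

```python
# Structural sketch of the eight-tree voting authenticator (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n_features = 37 * 2 - 1                      # e.g. hold times plus digraph latencies

genuine   = rng.normal(0.0, 1.0, size=(387, n_features))        # raw vectors, one user
simulated = (genuine[rng.integers(0, 387, size=387 * 19)]
             + rng.normal(0.0, 0.1, size=(387 * 19, n_features)))  # Monte Carlo-style augmentation
impostor  = rng.normal(1.5, 1.0, size=(2000, n_features))        # other users' vectors

X = np.vstack([genuine, simulated, impostor])
y = np.concatenate([np.ones(len(genuine) + len(simulated)), np.zeros(len(impostor))])

trees = []
for seed in range(8):                        # stand-in for the eight wavelet-transformed subsets
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    trees.append(DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx]))

def authenticate(sample):
    """Accept if at least three of the eight trees vote 'genuine'."""
    votes = sum(int(t.predict(sample.reshape(1, -1))[0]) for t in trees)
    return votes >= 3

print(authenticate(genuine[0]), authenticate(impostor[0]))
```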
6.
This paper addresses real-time decision-making associated with acoustic measurements for online surveillance of undersea targets moving over a deployed sensor network. The underlying algorithm is built upon the principles of symbolic dynamic filtering for feature extraction and formal language theory for decision-making, where the decision threshold for target detection is estimated based on time series data collected from an ensemble of passive sonar sensors that cover the anticipated tracks of moving targets. Adaptation of the decision thresholds to the real-time sensor data is optimal in the sense of weighted linear least squares. The algorithm has been validated on a simulated sensor-network test-bed with time series data from an ensemble of target tracks.
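The abstract does not spell out the estimator, so the following is only an assumed illustration of how a detection threshold could be adapted by weighted linear least squares: each passive-sonar sensor contributes a local threshold estimate and a confidence weight, and a line in a hypothetical covariate (sensor-to-track range) is fitted.

```python
# Hedged illustration: weighted linear least-squares adaptation of a
# detection threshold from per-sensor estimates (all numbers invented).
import numpy as np

range_km  = np.array([1.0, 2.5, 4.0, 6.0, 8.5])        # hypothetical sensor-to-track ranges
local_thr = np.array([0.42, 0.47, 0.55, 0.61, 0.70])   # per-sensor threshold estimates
weights   = np.array([9.0, 7.0, 4.0, 2.0, 1.0])        # e.g. inverse noise variance

# Weighted least squares for threshold(r) = a + b * r:
# scale rows of the design matrix and the targets by sqrt(weights).
A  = np.column_stack([np.ones_like(range_km), range_km])
sw = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(A * sw[:, None], local_thr * sw, rcond=None)
a, b = coef
print(f"adapted threshold at 5 km: {a + b * 5.0:.3f}")
```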
7.
Phoha, V. Computer, 1997, 30(10): 97-98.
While there is no universally recognized standard for software documentation, there is a standard for documenting engineering and scientific software. Developed by the American National Standards Institute (ANSI) and the American Nuclear Society (ANS) in 1995, it is called the ANSI/ANS 10.3-1995 Standard for Documentation of Computer Software. The standard provides a flexible, robust framework for documentation needs. One of its goals is to encourage better communication between developer and user and to facilitate effective selection, usage, transfer, conversion and modification of computer software. The standard is not a rigid set of specifications but a guide that can apply to most software projects intended for internal or external use. While the standard cannot cover all documentation problems, it is a good starting point, even for the most complex software. Similarly, while the standard provides recommendations for documenting scientific and engineering software, it doesn't offer guidance for online monitoring, control or safety systems, and doesn't specifically address the unique requirements of consumer-oriented software. As a general guideline for clear, well-organized documentation, however, the ANSI/ANS 10.3-1995 standard can serve as a place for developers to begin a documentation methodology. The standard is fairly comprehensive, and it allows for individual developer differences and unique software documentation problems.
8.
To maintain quality of service, some heavily trafficked Web sites use multiple servers, which share information through a shared file system or data space. The Andrew File System (AFS) and Distributed File System (DFS), for example, can facilitate this sharing. In other sites, each server might have its own independent file system. Although scheduling algorithms for traditional distributed systems do not address the special needs of Web server clusters well, a significant evolution in the computational approach to artificial intelligence and cognitive engineering shows promise for Web request scheduling. Not only is this transformation - from discrete symbolic reasoning to massively parallel and connectionist neural modeling - of compelling scientific interest, but also of considerable practical value. Our novel application of connectionist neural modeling to map Web page requests to Web server caches maximizes hit ratio while load balancing among caches. In particular, we have developed a new learning algorithm for fast Web page allocation on a server using the self-organizing properties of the neural network (NN).
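A toy, assumption-laden version of the idea (the request encoding, learning rate, and cache count below are all invented, and this is not the paper's algorithm): a competitive, self-organizing layer with one unit per cache is nudged toward the requests it wins, so repeated requests keep mapping to the same cache while distinct requests spread across caches.

```python
# Competitive / self-organizing toy mapping of page requests to caches.
import numpy as np

rng = np.random.default_rng(2)
n_caches, n_features = 4, 8

def encode(url):
    """Crude request feature: hashed character histogram (illustrative only)."""
    v = np.zeros(n_features)
    for ch in url:
        v[hash(ch) % n_features] += 1.0
    return v / max(np.linalg.norm(v), 1e-9)

W = rng.normal(size=(n_caches, n_features))        # one weight vector per cache

def assign_and_learn(url, lr=0.1):
    x = encode(url)
    winner = int(np.argmax(W @ x))                  # most similar cache wins
    W[winner] += lr * (x - W[winner])               # pull the winner toward the request
    return winner

for url in ["/news/today.html", "/news/today.html", "/img/logo.png", "/api/cart"]:
    print(url, "-> cache", assign_and_learn(url))
```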
9.
Due to the complex nature of resonance-region interactions, significant effort has been devoted to quantifying resonance parameter uncertainty information through covariance matrices. Statistical uncertainties arising from measurements contribute only to the diagonal elements of the covariance matrix, whereas the off-diagonal contributions arise from multiple sources such as systematic errors in cross-section measurement and correlations due to the nuclear reaction formalism. Efforts have so far been devoted to minimizing the statistical uncertainty by repeated measurements, but systematic uncertainty cannot be reduced by mere repetition. The computer codes developed so far to generate resonance parameter covariances, such as SAMMY and KALMAN, have no provision to improve upon highly correlated experimental data and hence reduce the systematic uncertainty. We propose a new, entropy-based information-theoretic approach to reduce the systematic uncertainty in the covariance matrix element-wise, so that resonance parameters with minimum systematic uncertainty can be simulated. Our simulation approach will aid both experimentalists and evaluators in designing experimental facilities with minimum systematic uncertainty and thus improve the quality of measurement and the associated instrumentation. We demonstrate the utility of our approach by simulating the resonance parameters of Uranium-235 and Plutonium-239 with reduced systematic uncertainty.
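The diagonal/off-diagonal statement above can be seen in a small invented example: an uncorrelated statistical component fills only the diagonal of the covariance matrix, while a fully correlated systematic component (for example, a common normalization error) is what populates the off-diagonal elements.

```python
# Numerical illustration (resonance-parameter uncertainties are invented):
# statistical errors -> diagonal only; a fully correlated systematic error
# -> off-diagonal covariance contributions.
import numpy as np

stat_sigma = np.array([0.02, 0.05, 0.03])          # independent measurement errors
sys_sigma  = np.array([0.04, 0.04, 0.04])          # common systematic error

cov_stat = np.diag(stat_sigma ** 2)                # diagonal only
cov_sys  = np.outer(sys_sigma, sys_sigma)          # fully correlated -> off-diagonals
cov      = cov_stat + cov_sys

print(np.round(cov, 5))
print("correlation of parameters 1 and 2:",
      round(cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]), 3))
```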
10.
Heterogeneous and aggregate vectors are the two widely used feature vectors in fixed-text keystroke authentication. In this paper, we address the question “Which vectors, heterogeneous, aggregate, or a combination of both, are more discriminative and why?” We accomplish this in three ways: (1) by providing an intuitive example to illustrate how aggregation of features inherently reduces discriminability; (2) by formulating “discriminability” as a non-parametric estimate of the Bhattacharyya distance, we show theoretically that the discriminability of a heterogeneous vector is higher than that of an aggregate vector; and (3) by conducting user recognition experiments using a dataset containing keystrokes from 33 users typing a 32-character reference text, we empirically validate our theoretical analysis. To compare the discriminability of heterogeneous and aggregate vectors with different combinations of keystroke features, we conduct feature selection analysis using three methods: (1) ReliefF, (2) correlation-based feature selection, and (3) consistency-based feature selection. The results of the feature selection analysis reinforce the findings of our theoretical analysis.
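A toy calculation of point (1): assuming Gaussian per-user models (the paper uses a non-parametric Bhattacharyya estimate instead, and the numbers below are invented), summing two key-hold features into an aggregate can erase a separation that the heterogeneous vector retains.

```python
# Toy demonstration that aggregation can reduce discriminability, using the
# closed-form Bhattacharyya distance between Gaussians (an assumption here).
import numpy as np

def bhattacharyya_gauss(m1, S1, m2, S2):
    S = (S1 + S2) / 2.0
    d = m1 - m2
    term1 = d @ np.linalg.solve(S, d) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

# Two users whose per-key hold times differ in opposite directions.
mA, mB = np.array([0.11, 0.09]), np.array([0.09, 0.11])
S = np.diag([1e-4, 1e-4])

hetero = bhattacharyya_gauss(mA, S, mB, S)                      # keep both features
agg    = bhattacharyya_gauss(mA.sum(keepdims=True), np.array([[2e-4]]),
                             mB.sum(keepdims=True), np.array([[2e-4]]))
print(f"heterogeneous: {hetero:.3f}, aggregate: {agg:.3f}")     # aggregate collapses to ~0
```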