Full text (paid): 19 articles; free: 0
By subject: Architectural Science 1; Energy & Power Engineering 2; Radio & Electronics 1; Metallurgical Industry 2; Automation Technology 13
By year: 2017 (1), 2014 (1), 2010 (2), 2009 (1), 2005 (1), 2004 (1), 2002 (1), 2000 (3), 1999 (1), 1994 (1), 1992 (2), 1988 (2), 1986 (1), 1985 (1)
19 results found (search time: 406 ms)
11.
The Lake Wabamun area, in Alberta, is unique within Canada in that there are four coal-fired power plants within a 500 km² area. Continuous monitoring of ambient total gaseous mercury (TGM) concentrations in the Lake Wabamun area was undertaken at two sites, Genesee and Meadows. The data were analyzed in order to characterise the effect of the coal-fired power plants on the regional TGM. Mean concentrations of 1.57 ng/m³ for Genesee and 1.50 ng/m³ for Meadows were comparable to other Canadian sites. Maximum concentrations of 9.50 ng/m³ and 4.43 ng/m³ were comparable to maxima recorded at Canadian sites influenced by anthropogenic sources. The Genesee site was directly affected by the coal-fired power plants under northwest winds, as evidenced by episodes of elevated TGM, NOx and SO2 concentrations. NOx/TGM and SO2/TGM ratios of 21.71 and 19.98 µg/ng, respectively, were characteristic of the episodic events from the northwest wind direction. AERMOD modeling predicted that coal-fired power plant TGM emissions under normal operating conditions can influence hourly ground-level concentrations by 0.46-1.19 ng/m³. The effect of changes in coal-fired power plant electricity production on the ambient TGM concentrations was also investigated, and was useful in explaining some of the episodes.
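The reported NOx/TGM and SO2/TGM ratios are straightforward quotients of co-measured concentrations. A minimal sketch, with invented episode values chosen only so the quotients land near the reported 21.71 and 19.98 µg/ng (the actual monitoring data are not given here):

```python
def pollutant_ratio(pollutant_ug_m3, tgm_ng_m3):
    """Ratio of a co-pollutant (µg/m³) to TGM (ng/m³), in µg/ng."""
    return pollutant_ug_m3 / tgm_ng_m3

# Hypothetical episode readings, not the actual Genesee data.
nox, so2, tgm = 65.13, 59.94, 3.0
print(round(pollutant_ratio(nox, tgm), 2))  # 21.71
print(round(pollutant_ratio(so2, tgm), 2))  # 19.98
```

In practice such ratios are computed per hourly episode and compared against the plant's known NOx:SO2:Hg emission profile to attribute the event to a source direction.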
12.
To use graphics efficiently in an automatic report generation system, one has to model messages and how they go from the writer (intention) to the reader (interpretation). This paper describes PostGraphe, a system which generates a report integrating graphics and text from a set of writer's intentions. The system is given the data in tabular form as might be found in a spreadsheet; also input is a declaration of the types of values in the columns of the table. The user then indicates the intentions to be conveyed in the graphics (e.g., compare two variables or show the evolution of a set of variables) and the system generates a report in LaTeX with the appropriate PostScript graphic files. PostGraphe uses the same information to generate the accompanying text that helps the reader to focus on the important points of the graphics. We also describe how these ideas have been embedded to create a new Chart Wizard for Microsoft Excel. Received 20 August 1998 / Revised 2 September 1999 / Accepted 27 September 1999
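The intention-to-graphic step can be pictured as a lookup from communicative goals to chart types plus a caption plan. A hypothetical Python sketch (the mapping and names are illustrative, not PostGraphe's actual knowledge base or output format):

```python
# Hypothetical mapping from writer intentions to chart types, loosely in the
# spirit of PostGraphe; the real system's rules and LaTeX output differ.
INTENTION_TO_CHART = {
    "compare": "bar",        # compare two variables
    "evolution": "line",     # show the evolution of a set of variables
    "proportion": "pie",     # show the parts of a whole
}

def plan_graphic(intention, variables):
    """Turn one writer intention into a chart plan with a caption."""
    chart = INTENTION_TO_CHART.get(intention)
    if chart is None:
        raise ValueError(f"no chart rule for intention {intention!r}")
    caption = f"{intention.capitalize()} of {', '.join(variables)}"
    return {"chart": chart, "variables": variables, "caption": caption}

plan = plan_graphic("evolution", ["profit", "expenses"])
print(plan["chart"])    # line
print(plan["caption"])  # Evolution of profit, expenses
```

The caption text doubles as the seed for the accompanying prose, which is how a single intention declaration can drive both the figure and the text around it.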
13.
This paper describes the coupling of logic programming with Icon, which is a programming language aimed at string processing. Icon and Prolog have many similarities and their integration is feasible and desirable because the weaknesses of one can be compensated for by the strengths of the other. In our case, a Prolog interpreter was written as an Icon procedure that can be linked and called by an Icon program. This interpreter deals with all Icon data types and can be called in the context of the goal-directed evaluation of Icon. We give an example showing the power of this symbiosis between these two languages where a Prolog call in Icon is a generator and an Icon call in a Prolog clause is a built-in predicate.
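The "Prolog call as a generator" idea maps naturally onto generator functions in other languages: each solution is produced lazily, and backtracking is just asking for the next one. A loose Python analogy (hypothetical fact base and API, not the Icon implementation):

```python
# A Prolog-style fact base queried through a Python generator, mimicking how
# a Prolog call embedded in Icon behaves as a generator of solutions.
FACTS = {"parent": [("tom", "bob"), ("tom", "liz"), ("bob", "ann")]}

def solve(predicate, first=None):
    """Yield each (X, Y) binding for predicate(first, Y); None = unbound."""
    for x, y in FACTS[predicate]:
        if first is None or x == first:
            yield (x, y)

# Goal-directed use: solutions are taken one at a time, on demand.
children_of_tom = [y for _, y in solve("parent", "tom")]
print(children_of_tom)  # ['bob', 'liz']
```

The reverse direction (an Icon call as a Prolog built-in predicate) would correspond here to the interpreter invoking a host-language function during resolution.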
14.
A secondary and tertiary structure editor for nucleic acids (total citations: 1; self-citations: 0; other: 1)
A major difficulty in the evaluation of secondary and tertiary structures of nucleic acids is the lack of convenient methods for their construction and representation. As a first step in a study of the symbolic representation of biopolymers, we report the development of a structure editor written in Pascal, permitting model construction on the screen of a personal computer. The program calculates energies for helical regions, allows user-defined helices and displays the secondary structure of a nucleic acid based on a user-selected set of helices. Screen and printer outputs can be in the form of a backbone or the letters of the primary sequence. The molecule can then be displayed in a format which simulates its three-dimensional structure. Using appropriate glasses, the molecule can be viewed on the screen in three dimensions. Other options include the manipulation of helices and single-stranded regions which results in changes in the spatial relationship between different regions of the molecule. The editor requires an IBM or compatible PC, 640 kbyte memory and a medium or high resolution graphics card.
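Secondary structures are commonly exchanged in dot-bracket notation, where matched parentheses mark base pairs and dots mark unpaired bases. A small Python sketch of recovering the pairs from that notation (illustrative only; the Pascal editor described above uses its own internal helix representation):

```python
def paired_positions(dot_bracket):
    """Return (i, j) index pairs for each base pair in a dot-bracket string."""
    stack, pairs = [], []
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)          # opening half of a pair: remember it
        elif ch == ")":
            if not stack:
                raise ValueError("unbalanced structure")
            pairs.append((stack.pop(), i))   # close the most recent opening
    if stack:
        raise ValueError("unbalanced structure")
    return sorted(pairs)

print(paired_positions("((..))."))  # [(0, 5), (1, 4)]
```

A run of nested pairs like this is exactly a helix; grouping consecutive nested pairs is the usual next step before assigning stacking energies to helical regions.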
15.
Reviews the book, Vocabulaire de sciences cognitives by O. Houdé, D. Kayser, O. Koenig, J. Proust, and F. Rastier (1998). The Vocabulaire de sciences cognitives contains 130 alphabetically ordered entries, each entry corresponding to a different word or expression used in cognitive science. The entries are treated from the point of view of each of the five main disciplines contributing to cognitive science: artificial intelligence, neuroscience, linguistics, philosophy, and psychology. The texts concerning a given entry form distinct sections labeled with the name of the discipline concerned. Numerous cross-references to related entries are given. Overall, the Vocabulaire contains about 200 different texts, a third of which have been authored by the members of the editorial board, each of whom is specialized in one of the disciplines mentioned; the remaining texts were written by 54 other authors from these various fields. Of the 130 entries, only 2 very pivotal terms in cognitive science (FUNCTION and REPRESENTATION) receive complete multidisciplinary treatment; 15 entries referring mostly to major cognitive functions (e.g., LANGUAGE, LEARNING, MEMORY, PERCEPTION, REASONING) are covered by three or four disciplines, and 31 others receive a bidisciplinary treatment. Analysis of the 144 pairs of disciplines found in these 48 entries shows the neighborhood among disciplines to be fairly evenly distributed except in the case of psychology and neuroscience, whose greater conceptual proximity is explained by the fact that most neuroscience texts have been written by neuropsychologists. The contributions of the various disciplines were reviewed by one specialist and by at least one nonspecialist. Although predictably more critical, the specialists did not identify many serious problems or errors. 
However, the reviewers expressed some reservations concerning the choice of the terms deemed worthy of an entry, the limited number of disciplines contributing to some entries, the variable length and nature of the texts, as well as the exaggerated place sometimes given to secondary research. If the Vocabulaire de sciences cognitives does not constitute a monumental achievement, it is nonetheless an impressive piece of work, especially considering the breadth and state of the domain. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
16.
As basic as bilingual concordancers may appear, they are some of the most widely used computer-assisted translation tools among professional translators. Nevertheless, they still do not benefit from recent breakthroughs in machine translation. This paper describes the improvement of the commercial bilingual concordancer TransSearch in order to embed a word alignment feature. The use of statistical word alignment methods allows the system to spot user query translations, and thus the tool is transformed into a translation search engine. We describe several translation identification and postprocessing algorithms that enhance the application. The excellent results obtained using a large translation memory consisting of 8.3 million sentence pairs are confirmed via human evaluation.
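Spotting a query's translation inside retrieved sentence pairs rests on word-association statistics over the bitext. A toy sketch using the Dice coefficient over a three-sentence corpus (the real system uses proper statistical word-alignment models; the data and scores here are purely illustrative):

```python
from collections import Counter

# Tiny illustrative English-French bitext.
bitext = [
    ("the cat sleeps", "le chat dort"),
    ("the cat eats", "le chat mange"),
    ("the dog sleeps", "le chien dort"),
]

src_count, tgt_count, joint = Counter(), Counter(), Counter()
for en, fr in bitext:
    en_w, fr_w = set(en.split()), set(fr.split())
    src_count.update(en_w)
    tgt_count.update(fr_w)
    joint.update((e, f) for e in en_w for f in fr_w)  # co-occurrence counts

def dice(e, f):
    """Dice association between a source and a target word."""
    return 2 * joint[(e, f)] / (src_count[e] + tgt_count[f])

# "cat" should associate most strongly with "chat".
best = max(tgt_count, key=lambda f: dice("cat", f))
print(best)  # chat
```

Scaled up, the same idea lets the concordancer highlight the span of the target sentence most likely to translate the user's query.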
17.
This paper describes a new approach for debugging lazy functional languages. It rests on the fact that a functional program is the transformation of an expression; one debugs a program by investigating the syntactic form of the expression and by stopping the reduction process at given points. We show what problems are involved and our approach to solving them in a prototype implementation.
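The idea of halting reduction at chosen points can be mimicked with explicit thunks whose forcing is observable: each force is a reduction step, and a breakpoint predicate can stop evaluation there. A hypothetical Python sketch (a design illustration, not the paper's prototype):

```python
class Thunk:
    trace = []                      # record of forced (reduced) expressions

    def __init__(self, name, compute):
        self.name, self.compute = name, compute
        self.value, self.forced = None, False

    def force(self, stop_at=None):
        if not self.forced:
            Thunk.trace.append(self.name)
            if stop_at and stop_at(self.name):
                raise RuntimeError(f"breakpoint at {self.name}")
            self.value, self.forced = self.compute(), True
        return self.value

x = Thunk("x", lambda: 21)
y = Thunk("y", lambda: x.force() * 2)   # y depends lazily on x
print(y.force())       # 42
print(Thunk.trace)     # ['y', 'x']
```

The trace makes the demand-driven evaluation order visible, which is exactly what is hard to see in a lazy language and what such a debugger exposes.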
18.
This paper describes a way of expressing λ-expressions (which produce closures) in terms of ε-expressions (λ-expressions containing only local and global variable references) and calls to an interactive compiler that compiles ε-expressions. This point of view is an interesting way of describing the semantics of λ-expressions and closure generation. It also leads to an efficient closure implementation both in time and space. A closure is uniformly represented as a piece of code instead of a compound object containing a code and environment pointer. This method can also be used to simulate closures in conventional dialects of Lisp.
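Closure conversion makes the environment of a nested function explicit, so the remaining code contains only local and global references, as an ε-expression does. A Python analogy (names are illustrative; the paper's technique goes further and specializes the code itself so no separate environment pointer survives):

```python
def adder_closed(env, y):
    """Top-level code with no free variables: the environment is a parameter."""
    return env["x"] + y

def make_adder(x):
    env = {"x": x}                 # the captured environment, made explicit
    # Here the "closure" is still a (code, env) pair wrapped in a lambda;
    # the paper instead compiles a specialized piece of code per closure.
    return lambda y: adder_closed(env, y)

add5 = make_adder(5)
print(add5(3))   # 8
```

The space/time gain claimed in the paper comes from collapsing the pair into a single code object, so calling a closure is just jumping to code.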
19.
This paper describes and evaluates a parallel program for determining the three-dimensional structure of nucleic acids. A parallel constraint satisfaction algorithm is used to search a discrete space of shapes. Using two realistic data sets, we compare a previous sequential version of the program written in Miranda to the new sequential and parallel versions written in C, Scheme, and Multilisp, and explain how these new versions were designed to attain good absolute performance. Critical issues were the performance of floating-point operations, garbage collection, load balancing, and contention for shared data. We found that speedup was dependent on the data set. For the first data set, nearly linear speedup was observed for up to 64 processors whereas for the second the speedup was limited to a factor of 16.
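A constraint search over a discrete space proceeds by extending a partial assignment and pruning inconsistent prefixes; parallelism then comes naturally from handing each worker a different choice for the first variable. A toy sequential sketch in Python (the constraint is illustrative, not the program's actual distance constraints on nucleotide conformations):

```python
def search(domains, consistent, partial=()):
    """Yield every full assignment whose every prefix satisfies `consistent`."""
    if len(partial) == len(domains):
        yield partial
        return
    for choice in domains[len(partial)]:
        candidate = partial + (choice,)
        if consistent(candidate):          # prune inconsistent prefixes early
            yield from search(domains, consistent, candidate)

# Example: three variables over {0, 1, 2}; adjacent values must differ.
domains = [(0, 1, 2)] * 3
sols = list(search(domains, lambda p: len(p) < 2 or p[-1] != p[-2]))
print(len(sols))  # 12
```

Splitting the top-level loop over workers gives independent subtrees, but as the abstract notes, uneven subtree sizes make load balancing the hard part.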
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号