  Subscription full text   11 articles
  Free full text   0 articles
Chemical industry   3 articles
Machinery and instruments   2 articles
General industrial technology   1 article
Automation technology   5 articles
  2016   1 article
  2009   1 article
  2007   1 article
  2006   1 article
  2004   1 article
  2002   1 article
  1998   1 article
  1994   1 article
  1992   2 articles
  1985   1 article
Sort order: 11 results found, search time 31 ms
1.
The resolution of an optical microscope is considerably less in the direction of the optical axis (z) than in the focal plane (x-y plane). This is true of conventional as well as confocal microscopes. For quantitative microscopy, for instance studies of the three-dimensional (3-D) organization of chromosomes in human interphase cell nuclei, the 3-D image must be reconstructed by a point spread function or an optical transfer function with careful consideration of the properties of the imaging system. To alleviate the reconstruction problem, a tilting device was developed so that several data sets of the same cell nucleus under different views could be registered. The 3-D information was obtained from a series of optical sections with a Zeiss transmission light microscope Axiomat using a stage with a computer-controlled stepping motor for movement in the z-axis. The tilting device on the Axiomat stage could turn a cell nucleus through any desired angle and also provide movement in the x-y direction. The technique was applied to 3-D imaging of human lymphocyte cell nuclei, which were labelled by in situ hybridization with the DNA probe pUC 1.77 (mainly specific for chromosome 1). For each nucleus, 3-D data sets were registered at viewing angles of 0°, 90° and 180°; the volumes and positions of the labelled regions (spots) were calculated. The results also confirm that, in principle, any angle of a 2π geometry can be fixed for data acquisition with a high reproducibility. This indicates the feasibility of axiotomographical microscopy of cell nuclei.
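Registering data sets recorded at different viewing angles amounts to a rigid 3-D rotation about the tilt axis. A minimal sketch of that coordinate transform (illustrative only; `tilt` is a hypothetical helper, not the authors' registration software, and the tilt axis is assumed to coincide with the x-axis):

```python
import math

def tilt(points, angle_deg):
    # Rotate (x, y, z) spot coordinates about the x-axis by the tilt angle,
    # mapping measurements taken at one viewing angle into the frame of another.
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x, y * c - z * s, y * s + z * c) for x, y, z in points]
```

For example, a spot at (1, 0, 2) viewed after a 90° tilt appears near (1, -2, 0): its poorly resolved z extent moves into the better-resolved x-y plane, which is the point of combining views.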
2.
The benefits of collaborative learning, although widely reported, lack quantitative rigor and detailed insight into the dynamics of interactions within the group, while individual contributions and their impact on group members and their collaborative work remain hidden behind joint group assessment. To bridge this gap we address three important aspects of collaborative learning focused on quantitative evaluation and prediction of group performance. First, we use machine learning techniques to predict group performance from data on member interactions, thereby identifying whether, and to what extent, the group's performance is driven by specific patterns of learning and interaction. Specifically, we explore the application of Extreme Learning Machines and Classification and Regression Trees to assess the predictability of group academic performance from live interaction data. Second, we propose a comparative model to unscramble individual student performances within the group. These performances are then used in a generative mixture model of group grading, as an explicit combination of isolated individual student grade expectations, and compared against the actual group performances to define what we coin collaboration synergy, a direct measure of the improvement due to collaborative learning. Finally, the impact of group composition in terms of gender and skills on learning performance and collaboration synergy is evaluated. The analysis indicates a high level of predictability of group performance based solely on the style and mechanics of collaboration, and quantitatively supports the claim that heterogeneous groups with a diversity of skills and genders benefit more from collaborative learning than homogeneous groups.
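The collaboration-synergy idea can be sketched in a few lines. This is not the paper's generative mixture model: as a simplifying assumption, the no-collaboration baseline is taken here as the plain mean of the isolated individual grade expectations, so synergy is simply the group's lift over that baseline.

```python
def collaboration_synergy(group_grade, individual_grades):
    # Expected group grade under a no-collaboration baseline.  The paper
    # combines isolated individual grade expectations in a generative
    # mixture model; the plain mean used here is a simplifying assumption.
    expected = sum(individual_grades) / len(individual_grades)
    # Positive synergy: the group outperformed its members' expectations.
    return group_grade - expected
```

A group scoring 85 whose members are individually expected to score 70, 80 and 75 has a synergy of +10; a group score of 70 against the same expectations gives a negative synergy of -5.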
3.
Studies of the three-dimensional (3-D) organization of cell nuclei are becoming increasingly important for the understanding of basic cellular events such as growth and differentiation. Modern methods of molecular biology, including in situ hybridization and immunofluorescence, allow the visualization of specific nuclear structures and the study of spatial arrangements of chromosome domains in interphase nuclei. Specific methods for labelling nuclear structures are used to develop computerized techniques for the automated analysis of the 3-D organization of cell nuclei. For this purpose, a coordinate system suitable for the analysis of tri-axial ellipsoidal nuclei is determined. High-resolution 3-D images are obtained using confocal scanning laser microscopy. The results demonstrate that with these methods it is possible to recognize the distribution of visualized structures and to obtain useful information regarding the 3-D organization of the nuclear structure of different cell systems.
4.
A heat-effective ‘integrated’ process carried out in one reactor, composed of exothermic oxidative coupling of CH4 over a fixed catalyst bed and endothermic pyrolysis of naphtha injected from outside into the stream of gaseous coupling products in the hot, oxygen-free post-catalytic zone, has been studied. The additivity of the yields of ethylene formed in the two component processes was examined under varied operating conditions (type of naphtha fraction, flow rate of reagents and temperature of pyrolysis). A very high degree of additivity of the yields of ethylene and its main co-products was observed, independently of the relative contribution of the component processes to the integrated process and of the applied variations in the process conditions. Evidently, the mutual interactions between the component processes and products were negligible under the experimental conditions. © 1998 Society of Chemical Industry
5.
A heat‐effective ‘integrated’ process of C2H4 production, incorporating exothermic oxidative coupling of methane (OCM) carried out in the catalytic section of a flow tubular reactor and endothermic pyrolysis of naphtha carried out in the post-catalytic section of the same reactor, studied earlier in a small silica reactor, was now examined in a scaled‐up unit with a stainless‐steel (1H18N9T) reactor (volume 400 cm3, Li/MgO catalyst bed 165 cm3). It was demonstrated that, depending on the operating conditions, such an integrated process could be realized over a wide range of relative contributions of the two component processes, always leading to an increase in the C2H4 yield compared with OCM or pyrolysis alone. A high degree of additivity of the yields of all products was observed in all cases, independently of the relative contribution of OCM and pyrolysis. These results indicated that in the scaled‐up unit with a stainless‐steel reactor, the interactions between the component processes and products were negligible under the experimental conditions. The overall balance of CH4, consumed in OCM and formed in pyrolysis, was negative, zero or positive, depending on the relative contribution of the component processes. The integrated process could therefore be based either on CH4 and naphtha as raw materials or exclusively on naphtha, with recirculation of the excess CH4 to the OCM section. Copyright © 2004 Society of Chemical Industry
6.
The robustness of combining diverse classifiers by majority voting has recently been illustrated in the pattern recognition literature. Furthermore, negatively correlated classifiers turn out to offer further improvement of majority voting performance, even compared with the idealised model of independent classifiers. However, negatively correlated classifiers represent a very unlikely situation in real-world classification problems, and their benefits usually remain out of reach. Nevertheless, it is theoretically possible to obtain a 0% majority voting error using a finite number of classifiers with error levels below 50%. We attempt to show that structuring classifiers into relevant multistage organisations can widen this boundary, and the limits of majority voting error, even further. Introducing discrete error distributions for the analysis, we show how majority voting errors and their limits depend upon the parameters of a multiple classifier system with hardened binary outputs (correct/incorrect). Moreover, we investigate the sensitivity of boundary distributions of classifier outputs to small discrepancies, modelled by random changes of votes, and propose new, more stable patterns of boundary distributions. Finally, we show how organising classifiers into different structures can be used to widen the limits of majority voting errors, and how this phenomenon can be effectively exploited. Received: 17 November 2000. Revised: 27 November 2001. Accepted: 29 November 2001. Correspondence and offprint requests to: D. Ruta, Applied Computing Research Unit, Division of Computer and Information Systems, University of Paisley, High Street, Paisley PA1 2BE, UK. Email: ruta-ci0@paisley.ac.uk
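The claim that a finite ensemble of classifiers, each with error below 50%, can reach a 0% majority-voting error is easy to illustrate: spread the individual errors so that no sample is misclassified by a majority. A minimal sketch over hardened binary outputs (`majority_error` is a hypothetical helper, not the paper's framework):

```python
def majority_error(outcomes):
    # outcomes: one tuple per sample of hardened binary classifier
    # outputs, 1 = correct, 0 = incorrect.  A sample is misclassified
    # by the ensemble when at most half of the votes are correct.
    wrong = sum(1 for votes in outcomes if sum(votes) <= len(votes) // 2)
    return wrong / len(outcomes)

# Three classifiers, each wrong on 2 of 6 samples (individual error 1/3),
# with the errors spread so that no two classifiers fail on the same sample:
outcomes = [(0, 1, 1), (0, 1, 1), (1, 0, 1), (1, 0, 1), (1, 1, 0), (1, 1, 0)]
```

Here every sample still collects two correct votes, so the majority vote is always right even though every individual classifier errs on a third of the samples.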
7.
Despite recent successes and advancements in artificial intelligence and machine learning, the domain remains under continuous challenge from, and guidance by, phenomena and processes observed in the natural world. Humans remain unsurpassed in their efficiency in dealing with and learning from uncertain information coming in a variety of forms, and more and more robust learning and optimisation algorithms have their analytical engine built on the basis of some nature-inspired phenomenon. The excellence of neural networks and kernel-based learning methods, and the emergence of particle-, swarm- and social-behaviour-based optimisation methods, are just a few of many facts indicating a trend towards greater exploitation of nature-inspired models and systems. This work demonstrates how the simple concept of a physical field can be adopted to build a complete framework for supervised and unsupervised learning. Inspiration for artificial learning has been found in the mechanics of physical fields on both micro and macro scales. Exploiting the analogies between data and charged particles subjected to gravity, electrostatic and gas-particle fields, a family of new algorithms has been developed and applied to classification, clustering and data condensation, while properties of the field were further used in a unique visualisation of classification and classifier fusion models. The paper includes extensive pictorial examples and visual interpretations of the presented techniques, along with comparative testing over well-known real and artificial datasets.
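As a loose illustration of the gravity analogy (not the authors' algorithms), data condensation can be sketched as repeatedly merging the two nearest "particles" at their centre of mass, accumulating mass, until only a few weighted prototypes remain:

```python
import math

def condense(points, k):
    # points: list of (x, y); condense to k weighted prototypes by
    # repeatedly merging the two nearest particles at their centre of mass.
    parts = [(x, y, 1.0) for x, y in points]          # unit mass per point
    while len(parts) > k:
        i, j = min(
            ((a, b) for a in range(len(parts)) for b in range(a + 1, len(parts))),
            key=lambda ab: math.dist(parts[ab[0]][:2], parts[ab[1]][:2]),
        )
        x1, y1, m1 = parts[i]
        x2, y2, m2 = parts[j]
        m = m1 + m2
        merged = ((x1 * m1 + x2 * m2) / m, (y1 * m1 + y2 * m2) / m, m)
        parts = [p for t, p in enumerate(parts) if t not in (i, j)] + [merged]
    return parts
```

Two tight pairs of points, condensed to two prototypes, collapse to their pair midpoints with mass 2 each; heavier prototypes then stand in for denser regions of the data.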
8.
The oxidative properties of Li/MgO in the absence of O2 were studied at 730 °C using C2H4 as a reducing agent and a multisectional flow stainless-steel tubular reactor. Large amounts of CO and H2 were detected. It was demonstrated that Li/MgO exhibits a high oxygen mobility. The exchange capacity was (2.4–4.0) × 10^20 oxygen atoms per gram of catalyst, and the period of oxygen donation was 150 h or more. After reduction of the catalyst, its oxygen transfer ability was fully restored by re-oxidation in a stream of air. Oxidation of C2H4 to CO and H2 was accompanied by its decomposition to C and H2. The ratio H2/C2H4 was found to be 1.91 ± 0.13, independently of the oxidation state of the catalyst, the location of the sampling point along the catalyst bed (and the post-catalytic zone) and the duration of the experiment. The mechanism is discussed.
9.
Genetic algorithms in classifier fusion   Cited by: 2 (self-citations: 0, citations by others: 2)
Intense research on classifier fusion in recent years has revealed that the combined performance strongly depends on careful selection of the classifiers to be combined. Classifier performance depends, in turn, on careful selection of features, which may be further restricted to subspaces of the data domain. On the other hand, a number of classifier fusion techniques are already available, and the choice of the most suitable method depends in turn on the selections made within the classifier, feature and data spaces. In all these multidimensional selection tasks, genetic algorithms (GAs) appear to be one of the most suitable techniques, providing a reasonable balance between search complexity and the performance of the solutions found. In this work, an attempt is made to revise the capability of genetic algorithms to perform selection across the many dimensions of the classifier fusion process, including data, features, classifiers and even classifier combiners. In the first of the discussed models, the potential for improving combined classification through GA-selected weights for the soft combining of classifier outputs is investigated. The second of the proposed models describes a more general system in which a specifically designed GA is applied to selection carried out simultaneously along many dimensions of the classifier fusion process. Both the weighted soft combiners and the prototype of the three-dimensional fusion–classifier–feature selection model have been developed and tested on typical benchmark datasets, and some comparative experimental results are presented.
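The first model, GA-selected weights for soft combining, can be sketched with a toy genetic algorithm. All names and parameters here are illustrative assumptions, not the paper's implementation: fitness is simply the number of samples the weighted soft vote classifies correctly, with elitist truncation selection, one-point crossover and Gaussian mutation of a single gene.

```python
import random

def weighted_vote(weights, probs):
    # probs: one class-probability vector per classifier for a single sample.
    n_classes = len(probs[0])
    combined = [sum(w * p[c] for w, p in zip(weights, probs))
                for c in range(n_classes)]
    return combined.index(max(combined))

def ga_weights(prob_sets, labels, pop=20, gens=30, seed=0):
    # prob_sets: per sample, a list of per-classifier probability vectors.
    rng = random.Random(seed)
    n = len(prob_sets[0])                       # number of classifiers

    def fitness(w):
        return sum(weighted_vote(w, ps) == y
                   for ps, y in zip(prob_sets, labels))

    population = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]         # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n)                # Gaussian mutation of one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

With a reliable classifier paired against a systematically wrong one, the GA quickly settles on weights that favour the reliable classifier, restoring perfect accuracy on the sample set.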
10.
A business incurs much higher costs when attempting to win new customers than when retaining existing ones. As a result, much research has been devoted to new ways of identifying those customers who have a high risk of churning. However, customer retention efforts also consume substantial organisational resources. In response to these issues, the next generation of churn management should focus on accuracy. A variety of churn management techniques have been developed to meet these requirements. The focus of this paper is to review some of the most popular technologies identified in the literature for the development of a customer churn management platform. The advantages and disadvantages of the identified technologies are discussed, and a discussion of future research directions is offered.
Copyright © Beijing Qinyun Technology Development Co., Ltd.   京ICP备09084417号