Similar Documents
20 similar documents found (search time: 15 ms)
1.
We propose a framework to study the computational complexity of definable relations on a structure. Many of the notions we discuss are old, but the viewpoint is new. We believe that all the pieces fit together smoothly under this new point of view. We also survey related results in the area. More concretely, we study the space of sequences of relations over a given structure. On this space, we develop notions of c.e.-ness, reducibility, join and jump. These notions are equivalent to other notions studied in other settings. We explain the equivalences and differences between these notions.

2.
The objective of this paper is to propose two morphological table verification techniques, both based on the probabilistic rough sets approach. Fundamental aspects of the theories of rough sets and probabilistic approximate classification, adapted here for the analysis of a morphological table, are discussed. These two related theories were used to develop experimental computer programs, able to learn from examples, for the analysis of dependencies between variables in a given morphological table. The verification programs were applied to two morphological tables of different complexity and development history. The first test used a nine-variable table that has been under development for approximately 15 years and has undergone many changes and corrections; the second used a 42-variable table developed only recently. The diagrams prepared reveal the learning character of the verification process and the differences between learning based on the theory of rough sets and on the probabilistic theory of approximate classification. The results of the first test confirmed the expected comparable importance of the individual variables, while in the second test the variables were of significantly different importance and many could be eliminated. The proposed verification techniques are still experimental, but the initial results are promising and demonstrate that the approach and the developed computer tools are potentially useful.
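The lower and upper approximations at the heart of rough-set analysis can be sketched as follows. This is a minimal illustration of the general construct, not the authors' verification program, and the example objects and labels are hypothetical.

```python
def approximations(objects, decision, target_label):
    # Group objects into indiscernibility classes: objects sharing
    # identical attribute tuples are indistinguishable.
    classes = {}
    for oid, attrs in objects.items():
        classes.setdefault(attrs, set()).add(oid)
    target = {oid for oid, d in decision.items() if d == target_label}
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # class lies entirely inside the concept
            lower |= cls
        if cls & target:         # class overlaps the concept
            upper |= cls
    return lower, upper

# Hypothetical 5-object table with two condition attributes.
objects = {1: ("a", 0), 2: ("a", 0), 3: ("b", 1), 4: ("b", 1), 5: ("c", 0)}
decision = {1: "yes", 2: "no", 3: "yes", 4: "yes", 5: "yes"}
low, up = approximations(objects, decision, "yes")
```

Objects 1 and 2 are indiscernible yet carry different decisions, so they fall in the upper but not the lower approximation; the gap between the two sets is exactly the region where dependencies between variables are uncertain.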

3.
Scholarly discourse on “disruptive technologies” has been strongly influenced by disruptive innovation theory. This theory is tailored for analyzing disruptions in markets and business. It is of limited use, however, in analyzing the broader social, moral and existential dynamics of technosocial disruption. Yet these broader dynamics should be of great scholarly concern, both in coming to terms with technological disruptions of the past and those of our current age. Technologies can disrupt social relations, institutions, epistemic paradigms, foundational concepts, values, and even the nature of human cognition and experience – domains of disruption that are largely neglected in existing discourse on disruptive technologies. Accordingly, this paper seeks to reorient scholarly discussion around a broader notion of technosocial disruption. This broader notion raises three foundational questions. First, how can technosocial disruption be conceptualized in a way that clearly sets it apart from the disruptive innovation framework? Second, how does the notion of technosocial disruption relate to the cognate notions of “disruptor” and “disruptiveness”? Third, can we advance criteria to assess the “degree of social disruptiveness” of different technologies? The paper clarifies these questions and proposes an answer to each of them. In doing so, it advances “technosocial disruption” as a key analysandum for future scholarship on the interactions between technology and society.

4.
We consider a family of 1-median location problems on a tree network where the vertex weights are ranges rather than point values. We define a new framework for making sound decisions under uncertainty which is primarily based on the interplay between the points in the tree and the data that induce the family of problems. An important feature of this framework is that it provides a novel understanding of the problem under uncertainty by collectively handling all possible realizations of the weights. The key element is the notion of a region of optimality. Based on the regions of optimality, we define three optimality criteria and give low-order polynomial methods to compute the associated solution sets.
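For a single point-valued realization of the weights, the classical vertex-restricted 1-median on a tree can be evaluated by brute force, as in this minimal sketch (the star tree, weights, and function name are illustrative; the paper's regions of optimality arise from comparing such evaluations across all realizations in the weight ranges):

```python
from collections import deque

def one_median(adj, weights):
    # Brute-force vertex 1-median on a tree: for each candidate vertex,
    # sum weight[v] * dist(candidate, v) over all vertices and keep the
    # minimizer (on a tree, some vertex is always optimal).
    def dists(src):
        d = {src: 0.0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v, w in adj[u]:
                if v not in d:
                    d[v] = d[u] + w
                    q.append(v)
        return d
    return min(adj, key=lambda c: sum(weights[v] * dv
                                      for v, dv in dists(c).items()))

# Star with center 0; each weight dict is one realization from the ranges.
adj = {0: [(1, 1), (2, 1), (3, 1)],
       1: [(0, 1)], 2: [(0, 1)], 3: [(0, 1)]}
balanced = one_median(adj, {0: 0, 1: 1, 2: 1, 3: 1})   # -> 0
skewed = one_median(adj, {0: 0, 1: 1, 2: 1, 3: 5})     # -> 3
```

Two realizations of the same weight ranges can yield different medians, which is precisely why a region of optimality, the set of realizations under which a given point stays optimal, is the natural object to study.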

5.
Gaussian synapse ANNs in multi- and hyperspectral image data analysis
A new type of artificial neural network is used to identify different crops and ground elements from hyperspectral remote sensing data sets. These networks incorporate Gaussian synapses and are trained using a specific algorithm, Gaussian synapse back-propagation, described here. Gaussian synapses have an intrinsic filtering ability that permits the network to concentrate on what is relevant in the spectra and automatically discard what is not. The networks are structurally adapted to the complexity of the problem, as superfluous synapses and/or nodes are implicitly eliminated by the training procedure, pruning the network to the required size straight from the training set. The fundamental difference between the present proposal and other ANN topologies using Gaussian functions is that the latter use these functions as activation functions in the nodes, whereas here they are used as synaptic elements, allowing them to be easily shaped during training to produce any type of n-dimensional discriminator. This paper proposes a multi- and hyperspectral image segmenter built from the parallel and concurrent application of several of these networks, which provide a probability vector that is processed by a decision module. Depending on the criteria used in the decision module, different perspectives of the same image may be obtained. The resulting structure also offers the possibility of resolving mixtures, that is, of carrying out spectral unmixing in a very straightforward manner.
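The synaptic element can be sketched as a forward pass in which every connection applies a Gaussian to its scalar input before the node sums. The generic form A·exp(−B·(x−C)²) and the shapes below are illustrative assumptions; the trained network and the back-propagation rules of the paper are not reproduced.

```python
import numpy as np

def gaussian_synapse_layer(x, A, B, C):
    # Each synapse applies A * exp(-B * (x - C)**2) to its scalar input;
    # the node then sums over its incoming synapses.
    # Shapes: x is (n_in,); A, B, C are (n_out, n_in).
    return (A * np.exp(-B * (x - C) ** 2)).sum(axis=1)

x = np.array([0.2, 0.8])                      # a tiny 2-band "spectrum"
A = np.ones((3, 2))
B = np.zeros((3, 2))                          # B = 0: no filtering at all
C = np.zeros((3, 2))
out = gaussian_synapse_layer(x, A, B, C)      # every synapse passes 1.0
```

Widening or narrowing B per synapse is what gives the network its band-selective filtering: a large B makes a synapse respond only near its center C, while A scaled toward zero effectively prunes the connection.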

6.
This paper presents a simplified analysis (model and failure criteria) for predicting the stress-strain response of cross-ply fiber-reinforced ceramic composite laminates under quasi-static loading and unloading conditions. The model formulation is an extension of the modified shear-lag theory previously introduced by the authors for analyzing unidirectional laminates under the same loading conditions. The present formulation considers a general damage state consisting of matrix cracking in both the transverse and longitudinal plies, as well as fiber failure. These damage modes are modeled by a set of failure criteria with minimal reliance on empirical data and can easily be employed in a variety of numerical or analytical methods. The criteria used to estimate the extent of matrix cracking and interfacial debonding are closed-form and require only basic material properties. The criterion for fiber failure requires a priori knowledge of a single empirical constant; this parameter, however, may be determined without microscopic investigation of the laminate microstructure. The results of the present simplified analysis match the experimental data well. The U.S. Government's right to retain a non-exclusive, royalty-free license in and to any copyright is acknowledged.

7.
Increasing the visible-light absorption of classic wide-bandgap photocatalysts like TiO2 has long been pursued in order to promote solar energy conversion. Modulating the composition and/or stoichiometry of these photocatalysts is essential to narrow their bandgap and produce a strong visible-light absorption band. The bands obtained so far, however, normally suffer from low absorbance and/or a narrow range. Herein, in contrast to the common tail-like absorption band of hydrogen-free, oxygen-deficient TiO2, an unusually strong absorption band spanning the full visible spectrum is achieved in anatase TiO2 by intentionally introducing atomic-hydrogen-mediated oxygen vacancies. Combining experimental characterization with theoretical calculations reveals the excitation of a new subvalence band, associated with oxygen vacancies filled by atomic hydrogen, as the origin of this band, which in turn enables active photoelectrochemical water oxidation under visible light. These findings could provide a powerful way of tailoring wide-bandgap semiconductors to capture solar light fully.

8.
Unsupervised clustering and clustering validity are essential instruments of data analytics. Although clustering is carried out under uncertainty, validity indices do not deliver any quantitative evaluation of the uncertainty in the suggested partitionings. Validity measures may also be biased toward the underlying clustering method, and neglecting a confidence requirement may result in over-partitioning. In the absence of an error estimate or a confidence parameter, probable clustering errors are forwarded to the later stages of the system, whereas an uncertainty margin on the projected labeling can be very useful for many applications such as machine learning. Herein, the validity issue is approached through estimation of this uncertainty, and a novel low-complexity index is proposed for fuzzy clustering. It involves only one-dimensional membership weights, regardless of the data dimension, stipulates no specific distribution, and is independent of the underlying similarity measure. Extensive tests and comparisons show that it can reliably estimate the optimum number of partitions under different data distributions and is more robust to over-partitioning. In a comparative correlation analysis between true clustering error rates and several known internal validity indices, the proposed index exhibited the strongest correlations, a relationship shown to be stable through additional statistical acceptance tests. The provided relative uncertainty measure can therefore also be used as a probable error estimate in clustering. It is, moreover, the only method known that can explicitly identify ambiguous data points, and it is adjustable according to the required confidence level.
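The proposed index itself is not reproduced in the abstract; as a point of reference, a classic low-complexity validity index of the same flavor, Bezdek's partition coefficient, which likewise depends only on the membership weights, can be sketched:

```python
import numpy as np

def partition_coefficient(U):
    # Bezdek's partition coefficient: mean of squared memberships over the
    # membership matrix U (rows: clusters, columns: data points).
    # 1.0 indicates a crisp partition; 1/c indicates maximal fuzziness.
    n = U.shape[1]
    return float((U ** 2).sum() / n)

crisp = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
fuzzy = np.full((2, 3), 0.5)       # every point split evenly: most uncertain
pc_crisp = partition_coefficient(crisp)
pc_fuzzy = partition_coefficient(fuzzy)
```

Like the paper's index, this baseline needs no distances and no distributional assumptions; unlike it, the partition coefficient gives no per-point confidence and cannot single out the ambiguous data points the abstract highlights.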

9.
Time series classification (TSC) has attracted considerable attention in the machine learning and data mining communities and has many successful applications, such as fault detection and product identification in the process of building a smart factory. Efficient and accurate classification remains challenging, however, owing to the complexity and high dimensionality of time series. This paper presents a new approach to time series classification based on convolutional neural networks (CNN). The proposed method contains three parts: short-time-gap feature extraction, multi-scale local feature learning, and global feature learning. In short-time-gap feature extraction, large kernel filters extract features within a short time gap from the raw time series. A multi-scale feature extraction technique is then applied during multi-scale local feature learning to obtain detailed representations. A global convolution operation with a large stride produces a robust, global feature representation. The features used for classification are a fusion of the short-time-gap, local multi-scale, and global feature representations. To test the efficiency of the proposed method, named multi-scale feature fusion convolutional neural networks (MSFFCNN), we designed and trained MSFFCNN on several public sensor, device, and simulated control time-series data sets. Comparative studies indicate that the proposed MSFFCNN outperforms the alternatives, and a detailed analysis of MSFFCNN is also provided.
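A full CNN is beyond the scope of an abstract, but the multi-scale idea can be illustrated with plain mean-filter convolutions. This is a toy stand-in, with arbitrary kernel widths, for the short-gap, local, and global branches, not the trained MSFFCNN.

```python
import numpy as np

def multi_scale_features(x, kernel_sizes=(3, 5, 9)):
    # Convolve the series with mean filters of several widths and
    # concatenate a small summary (mean, std) of each filtered scale.
    # Wider kernels play the role of the larger receptive fields in the
    # local and global branches.
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k
        conv = np.convolve(x, kernel, mode="valid")
        feats.extend([conv.mean(), conv.std()])
    return np.array(feats)

x = np.sin(np.linspace(0, 4 * np.pi, 64))   # a toy periodic series
f = multi_scale_features(x)                 # fused multi-scale descriptor
```

In the actual architecture the filters are learned and the fusion feeds a classifier head; the point here is only that features extracted at different temporal scales are concatenated into one representation.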

10.
The problem of blind linear data-symbol estimation and data detection is addressed for an air interface adopting the wideband direct-sequence code-division multiple-access multiuser multiplexing technique with the promising short-code configurations. The superior interference-suppression ability of code-constrained minimum-output-energy multipath-component estimation is exploited to develop three code-aided quasi-maximum signal-to-interference-and-noise-ratio (SINR) algorithms. These algorithms maximise approximate measures of the output SINR, each with variations, especially in the adaptive implementation, owing to the different criteria employed; the 'quasi-maximum SINR' label reflects the approximations involved. Extensive simulations indicate that all of these algorithms significantly outperform existing code-aided blind linear algorithms at considerably low computational complexity. Moreover, their adaptive versions exhibit a highly desirable trade-off between convergence speed and steady-state performance at further reduced computational complexity.
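The quantity the algorithms approximately maximise is the output SINR of a linear receiver. A minimal sketch, with a hypothetical two-tap channel and illustrative names, is:

```python
import numpy as np

def sinr_db(w, h, R_in):
    # Output SINR (in dB) of linear receiver w for desired channel h,
    # with interference-plus-noise covariance R_in:
    #   SINR = |w^H h|^2 / (w^H R_in w)
    sig = np.abs(w.conj() @ h) ** 2
    inr = np.real(w.conj() @ R_in @ w)
    return 10 * np.log10(sig / inr)

h = np.array([1.0, 0.0])          # toy desired-signal signature
R_in = 0.1 * np.eye(2)            # white interference-plus-noise
matched = h                       # receiver aligned with the signature
mismatched = np.array([0.1, 1.0]) # receiver mostly in the noise subspace
```

A receiver aligned with the desired signature attains a much higher SINR than a mismatched one; the blind algorithms of the paper search for such a receiver without training symbols, using only the spreading-code constraint.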

11.
Decision making in medical diagnosis is a complicated process. A large number of overlapping structures and cases, together with distractions, tiredness, and the limitations of the human visual system, can lead to inappropriate diagnoses. Machine learning (ML) methods have been employed to assist clinicians in overcoming these limitations and in making informed and correct decisions in disease diagnosis, and academic papers on the use of machine learning for disease diagnosis are increasingly being published. Hence, to determine how ML is used to improve diagnosis across varied medical disciplines, a systematic review is conducted in this study. Six databases were selected, and inclusion and exclusion criteria were employed to narrow the search. The eligible articles were classified by publication year, authors, type of article, research objective, inputs and outputs, problem and research gaps, and findings and results, and then analyzed to show the impact of ML methods on improving disease diagnosis. The findings show the most-used ML methods and the most common diseases studied, as well as the increase in the use of machine learning for disease diagnosis over the years. These results will help focus attention on neglected areas and determine the various ways in which ML methods can be employed to achieve the desired results.

12.
A sound disassembly Petri net model for the effective planning of disassembly processes and tasks is outlined. Owing to the unmanageable complexity associated with modelling disassembly processes and tasks, it becomes essential to have a more powerful Petri net model, developed by incorporating the concepts of expert systems, knowledge representation techniques, etc. Disassembly task planning at high and low levels can easily be represented by the proposed high- and low-level expert Petri nets. An algorithmic approach is also suggested for evaluating the end-of-life values of a product; these values are used to determine an optimal disassembly sequence, which is incorporated into the expert disassembly Petri net. The proposed expert enhanced high-level coloured disassembly Petri net is able to express such details vividly, and its application is demonstrated through the sample disassembly of a flashlight.
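The underlying token mechanics can be illustrated with ordinary (uncoloured) Petri net firing. The flashlight step below is a hypothetical token game for intuition only, not the paper's expert-enhanced coloured model.

```python
def fire(marking, transition):
    # A transition is a pair (input places, output places). It is enabled
    # when every input place holds at least one token; firing consumes one
    # token per input place and produces one per output place.
    inputs, outputs = transition
    if all(marking.get(p, 0) >= 1 for p in inputs):
        m = dict(marking)
        for p in inputs:
            m[p] -= 1
        for p in outputs:
            m[p] = m.get(p, 0) + 1
        return m
    return None  # transition not enabled in this marking

# Hypothetical disassembly step: an assembled flashlight yields a cap
# and a body.
t_remove_cap = (["assembled"], ["cap", "body"])
after = fire({"assembled": 1}, t_remove_cap)
blocked = fire({"assembled": 0}, t_remove_cap)
```

In the disassembly setting, places model subassembly states and transitions model disassembly tasks; the expert and coloured extensions of the paper add rules and typed tokens on top of exactly this firing discipline.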

13.
Although maximal localization is a basic notion in the consideration of phase-space representations of fields, it has not yet been pursued for general wave fields. We develop measures of spatial and directional spreads for nonparaxial waves in free space. These measures are invariant under translation and rotation and are shown to reduce to the conventional ones when applied to paraxial fields. The associated uncertainty relation sets limits to joint localization in coordinate and frequency space. This relation provides a basis for the definition of a joint localization measure that is analogous to the beam propagation factor (i.e., M2) of paraxial optics. The results are first developed for two-dimensional fields and then generalized to three dimensions.
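As a reference point, the paraxial beam-propagation factor that the proposed joint measure generalizes takes the standard textbook form below (standard paraxial optics, not the paper's nonparaxial definition):

```latex
% Paraxial beam-quality factor: waist radius w_0, far-field half-angle
% divergence \Theta, wavelength \lambda.  M^2 = 1 holds only for the
% ideal Gaussian beam; all other beams satisfy the inequality strictly.
M^2 = \frac{\pi\, w_0\, \Theta}{\lambda} \;\ge\; 1
```

The paper's contribution is a translation- and rotation-invariant analogue of this quantity valid beyond the paraxial regime.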

14.
A rough-fuzzy neural classifier
A new rough-set-encoded fuzzy neural classifier is introduced. Based on the concepts of rough set theory, methods for knowledge encoding, attribute reduction, and classification-system simplification are discussed. Fuzzy membership functions map crisp input information to fuzzy variables, addressing ill-defined data in classification and improving the nonlinear mapping ability of the system. A fuzzy inference method that incorporates importance factors for the system parameters is proposed, together with the network structure of the rough-fuzzy neural classifier and a supervised least-squares-error training algorithm. The implemented rough-set-encoded fuzzy neural classifier offers a low-dimensional network structure, a simple learning algorithm, short training times, and rich nonlinear behaviour.
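The fuzzification step, mapping a crisp input to a membership vector over linguistic terms, can be sketched with triangular membership functions. The triangular shape and the term breakpoints are illustrative assumptions; the abstract does not fix the function family.

```python
def triangular(x, a, b, c):
    # Triangular membership: 0 outside [a, c], rising linearly from a to
    # the peak at b, then falling linearly from b to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, terms):
    # Map one crisp input to its membership degree in each linguistic term.
    return {name: triangular(x, *abc) for name, abc in terms.items()}

# Hypothetical three-term partition of a normalized attribute.
terms = {"low": (0.0, 0.25, 0.5),
         "mid": (0.25, 0.5, 0.75),
         "high": (0.5, 0.75, 1.0)}
mu = fuzzify(0.5, terms)
```

The resulting membership vector, rather than the crisp value, is what feeds the network after the rough-set stage has reduced the attribute set.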

15.
Classification decision tree algorithms have recently been used in pattern-recognition problems. In this paper, we propose a self-designing system that uses classification tree algorithms and is capable of recognizing a large number of signals. Preprocessing techniques make the recognition process more effective: a combination of the original and preprocessed signals is projected into different transform domains, from which large sets of criteria characterizing the signals can be developed. At each node of the classification tree, an appropriately selected criterion is optimized with respect to desirable performance features such as complexity and noise immunity. The criterion is then employed in conjunction with a vector quantizer to divide the signals presented at that node into two approximately equal groups. When the process is complete, each signal is represented by a unique composite binary word index corresponding to the signal's path through the tree, from the input to one of its terminal nodes. Experimental results verify the excellent classification accuracy of this system; high performance is maintained for both noisy and corrupted data.
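The node-splitting idea can be sketched as follows. The median threshold is a simple stand-in for the two-level vector quantizer, and the energy criterion is an illustrative choice, not the criteria selected by the actual system.

```python
import statistics

def split_node(signals, criterion):
    # Divide the signals at a node into two roughly equal groups by
    # thresholding the criterion value at its median.  Each signal's
    # left/right assignment contributes one bit of its composite index.
    vals = [criterion(s) for s in signals]
    thr = statistics.median(vals)
    left = [s for s, v in zip(signals, vals) if v <= thr]
    right = [s for s, v in zip(signals, vals) if v > thr]
    return left, right

signals = [[0, 1], [2, 3], [10, 11], [12, 13]]
energy = lambda s: sum(x * x for x in s)     # illustrative criterion
left, right = split_node(signals, energy)
```

Applying such a split recursively until each leaf holds one signal yields the unique binary word index per signal that the abstract describes.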

16.
Molecular doping of inorganic semiconductors is a rising topic in the field of organic/inorganic hybrid electronics. However, it is difficult to find dopant molecules that simultaneously exhibit the strong reducing ability and the ambient stability needed for n-type doping of oxide semiconductors. Herein, successful n-type doping of SnO2 is demonstrated with a simple, air-robust, and cost-effective triphenylphosphine oxide molecule. Strikingly, it is discovered that electrons are transferred from the R3P+−O− σ-bond to peripheral tin atoms at the surface rather than to the directly interacting ones; that is, the transferred electrons are delocalized. This process is verified by multiple photophysical characterizations. The doping effect accounts for the enhanced conductivity and reduced work function of SnO2, which enlarges the built-in field from 0.01 to 0.07 eV and decreases the energy barrier at the SnO2/perovskite interface from 0.55 to 0.39 eV, enabling an increase in the conversion efficiency of perovskite solar cells from 19.01% to 20.69%.

17.
As a direct consequence of the digitalization of production systems, high-frequency and high-dimensional data have become more easily available. In data analysis, latent-structure-based methods are often employed for multivariate and complex data, but these methods are designed for supervised learning problems in which sufficient labeled data are available. Particularly at fast production rates, quality-characteristic data tend to be scarcer than the process data generated by multiple sensors and automated data collection schemes. One way to overcome the problem of scarce outputs is to employ semi-supervised learning methods, which use both labeled and unlabeled data; such an approach has been shown to be advantageous when the labeled and unlabeled data come from the same distribution. In real applications, however, the unlabeled data may contain outliers or even a drift in the process, which affects the performance of semi-supervised methods. The research question addressed in this work is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy is proposed using combined Hotelling's T2 and Q statistics and is applied with a semi-supervised principal component regression (SS-PCR) approach on both simulated and real data sets.
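The two monitoring statistics combined in the proposed strategy can be sketched against a plain PCA model. This is a minimal illustration with synthetic data; the iterative SS-PCR scheme itself is not reproduced here.

```python
import numpy as np

def t2_q(X_train, x_new, n_comp=2):
    # Hotelling's T2 and Q (squared prediction error) of a new sample
    # against a PCA model fitted on training data.  T2 measures distance
    # within the model plane; Q measures distance off the plane.
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                              # loadings (p x n_comp)
    lam = (s[:n_comp] ** 2) / (len(X_train) - 1)   # score variances
    t = (x_new - mu) @ P
    T2 = float(np.sum(t ** 2 / lam))
    resid = (x_new - mu) - P @ t
    Q = float(resid @ resid)
    return T2, Q

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # synthetic process data
T2_in, Q_in = t2_q(X, X.mean(axis=0))              # a perfectly central sample
T2_out, Q_out = t2_q(X, X.mean(axis=0) + 10.0)     # a gross outlier
```

Unlabeled samples whose T2 or Q exceed control limits derived from the labeled set would be screened out before the semi-supervised model is refit, which is the role these statistics play in the proposed iteration.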

18.
Self-assembly is one of the most powerful and efficient methods for designing functional nanomaterials. To generate optimal functional materials, it is essential to understand the pathway complexity of self-assembly, in which molecules aggregate along thermodynamically or kinetically favored pathways. Herein, a functional perylene diimide (PDI) derivative bearing diacetylene (DA) chains (PDI-DA) is designed. Temperature-controlled pathway complexity, with the evolution of distinct morphologies for the kinetic and thermodynamic products of PDI-DA, is investigated in detail. A facile strategy of UV-induced polymerization is adopted to trap and capture the metastable kinetic intermediates and thereby elucidate the self-assembly mechanism. PDI-DA shows two kinetic intermediates, with nanosheet and nanoparticle morphologies, before transforming into the thermodynamic product with fibrous morphology. Spectroscopic studies reveal distinct H- and J-aggregates for the kinetic and thermodynamic products, respectively. The polymerized fibrous PDI-DA displays reversible switching between the J-aggregate and the H-aggregate.

19.
M. Belevich 《Acta Mechanica》2005,180(1-4):83-106
Summary. Relations between standard three-dimensional fluid mechanics and the four-dimensional non-relativistic causal theory of perfect and viscous fluids are studied. The two theories differ on a number of points, and these differences are of two kinds: conceptual and qualitative. At the same time, the standard theory may be regarded as a limiting case of the causal fluid model, which allows one to draw parallels between the two sets of notions and ideas. We try to establish the correspondence between such classical notions as time, forces, potential energy, etc. and new ones including events, world-lines, metrics, curvature, and so on. The qualitative differences between the two models have such side effects as the necessity of new formulations of the First and Second Laws of Thermodynamics. These formulations, together with a discussion, are provided.

20.
This work presents an effective method for investigating the stability of solutions of ordinary differential equations. The notions of strong Lipschitz stability and strong uniform Lipschitz stability are introduced, and two theorems are proved. The first contains sufficient conditions under which strong Lipschitz stability of the zero solution of each of the respective limiting equations implies stability of the zero solution of the initial equation. The second gives sufficient conditions under which uniform Lipschitz stability of the zero solution of the initial equation implies uniform Lipschitz stability of the zero solution of each of the respective limiting equations.
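For reference, uniform Lipschitz stability of the zero solution of x′ = f(t, x) is commonly defined (following Dannan and Elaydi; the strong variants of the paper build on this notion) as:

```latex
% Uniform Lipschitz stability of the zero solution x \equiv 0 of
% x' = f(t, x), with x(t; t_0, x_0) the solution through (t_0, x_0):
\exists\, M > 0,\ \delta > 0 \ \text{such that}\quad
\|x(t; t_0, x_0)\| \le M \|x_0\|
\quad \text{for all } t \ge t_0 \ \text{whenever } \|x_0\| \le \delta,
\ \text{uniformly in } t_0 .
```

The constant M bounds the growth of solutions linearly in the initial deviation, which is stronger than plain uniform stability.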
