Similar Documents
20 similar documents found.
1.
The KIII model is an olfactory model proposed by W. J. Freeman, based on the physiological structure of the mammalian olfactory system. It has been applied to various pattern recognition systems, for example electronic noses and tea classification. However, the dynamics of the neurons in the KIII model are given by the Hodgkin–Huxley second-order differential equations, which incur a very high computational cost. In this paper, we first propose a simplified chaotic neuron dynamics to replace the Hodgkin–Huxley dynamics, and second, we propose using a high-resolution Fourier transform to extract features from the time series of the internal states of the M1 nodes in the KIII model, instead of the conventional standard-deviation method. Furthermore, noting that, from an information-processing standpoint, the human brain performs visual processing in the same way as olfactory processing, handwriting image recognition is treated as a new application field for the KIII model. Computer simulations of handwritten-character classification show that the proposed method is useful in terms of both computation time and recognition accuracy.
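To make the feature-extraction step concrete, the Python sketch below (illustrative only; names such as fft_features and n_peaks are not from the paper) shows how the dominant Fourier components of one M1-node trajectory could replace a standard-deviation feature:

```python
import numpy as np

def fft_features(signal, n_peaks=8, fs=1.0):
    """Return the strongest spectral lines of one M1-node time series."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = np.argsort(spectrum)[::-1][:n_peaks]   # strongest components first
    return np.concatenate([freqs[idx], spectrum[idx]])

# one feature vector per node; stack the vectors of all M1 nodes for the classifier
features = np.hstack([fft_features(np.random.randn(1024)) for _ in range(8)])
```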

2.
In this paper, an optimization approach is adopted to obtain the 12 material parameters used in McGinty's model for AL6022 by minimizing the differences between simulated and experimental stress–strain curves. Since the differences between the two stress–strain curves are only implicitly related to changes in the material parameters, a metamodeling technique is used to create explicit, approximate functions of these relationships. Radial basis functions (RBFs), which previous studies have shown to be effective for both low- and high-order nonlinear responses, are used for the metamodels, which are adaptively updated during the optimization. Two optimization formulation schemes are studied to address the issue of using inaccurate RBF models in optimization. The sampling, metamodeling, and optimization are performed using the integrated optimization framework HiPPO.
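A minimal sketch of such an adaptive RBF-metamodel loop is given below (Python/SciPy; the simulator, bounds, and sample sizes are placeholders, not McGinty's model or the HiPPO framework):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def simulation_error(params):                   # stand-in for the FE simulation:
    return float(np.sum((params - 0.3) ** 2))   # mismatch vs. the experimental curve

bounds = [(0.0, 1.0)] * 12                      # 12 material parameters
X = np.random.uniform(0.0, 1.0, size=(40, 12))  # initial sampling plan
y = np.array([simulation_error(x) for x in X])

for _ in range(10):                             # adaptive metamodel updates
    surrogate = RBFInterpolator(X, y)           # explicit approximation of the error
    res = minimize(lambda x: surrogate(x[None, :])[0],
                   x0=X[np.argmin(y)], bounds=bounds)
    X = np.vstack([X, res.x])                   # evaluate the true model at the optimum
    y = np.append(y, simulation_error(res.x))   # and refine the metamodel

print("best parameter set:", X[np.argmin(y)])
```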

3.
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human–automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action: omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher-order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human–automation interactive system (as represented by the model) will or will not satisfy safety properties under both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and the verification time of the models to show how the erroneous-behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered, and a design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
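As a toy illustration of this kind of erroneous-behavior generation (not the paper's formal task-model pattern), the Python sketch below perturbs a normative action sequence with Hollnagel's zero-order phenotypes; action names and probabilities are invented:

```python
import random

def generate_erroneous(normative, p=0.1, intrusions=("unrelated_action",)):
    """Apply omissions, repetitions, intrusions and jumps to a normative sequence."""
    out = []
    for i, act in enumerate(normative):
        r = random.random()
        if r < p:                                   # omission: the act is skipped
            continue
        elif r < 2 * p:                             # repetition: the act is performed twice
            out += [act, act]
        elif r < 3 * p:                             # intrusion: an unrelated act slips in
            out += [random.choice(intrusions), act]
        elif r < 4 * p and i + 1 < len(normative):  # jump: the next act is done prematurely
            out += [normative[i + 1], act]
        else:                                       # normative behavior
            out.append(act)
    return out

print(generate_erroneous(["select_mode", "enter_dose", "confirm", "fire_beam"]))
```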

4.
Coined quantum walks (QWs) are used in many contexts with the goal of understanding quantum systems and building quantum algorithms for quantum computers. Alternative models such as Szegedy's and continuous-time QWs were proposed by taking advantage of the fact that quantum theory seems to allow different quantized versions of the same classical model, in this case the classical random walk. In this work, we show the conditions under which coined QWs are equivalent to Szegedy's QWs. The two QW models share a large class of instances, in the sense that the evolution operators are equal when we convert the graph on which the coined QW takes place into the bipartite graph on which Szegedy's QW takes place, and vice versa. We also show that the abstract search algorithm using the coined QW model can be cast into Szegedy's searching framework using bipartite graphs with sinks.
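For reference, in commonly used notation (not taken from the paper), the two evolution operators being related are

\[
U_{\text{coined}} = S\,(C \otimes I), \qquad
W_{\text{Szegedy}} = R_B R_A, \quad
R_A = 2\sum_{x}\lvert\Phi_x\rangle\langle\Phi_x\rvert - I, \quad
R_B = 2\sum_{y}\lvert\Psi_y\rangle\langle\Psi_y\rvert - I,
\]

where S is the shift operator, C the coin, and |Φ_x⟩, |Ψ_y⟩ are superpositions over the edges incident to the two vertex classes of the bipartite graph.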

5.
We present a system that is able to autonomously build a 3D model of a robot's hand, along with a kinematic model of the robot's arm, starting from very little information. The system begins by using exploratory motions to locate and centre the robot's hand in the middle of its field of view, and then progressively builds the 3D and kinematic models. The system is flexible and easy to integrate with different robots because the model-building process does not require any fiducial markers to be attached to the robot's hand. To validate the models built by the system we perform a number of experiments. The results demonstrate that the hand model built by the system can be tracked with a precision on the order of 1 mm, and that the kinematic model is accurate enough to reliably position the hand of the robot in camera space.

6.
The JavaFit program is a package for carrying out interactive nonlinear least-squares fitting to determine the parameters of physical models from experimental data. It has been conceived as a platform-independent package aimed at the relatively modest computational needs of spectroscopists, who often need to determine physical parameters from a variety of spectral lineshape models. The program runs wherever a Java runtime module is available for the host platform, and it is designed to read a wide variety of data in ASCII column formats produced on DOS, Macintosh, and UNIX platforms.
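The following Python/SciPy snippet is not JavaFit's API; it merely illustrates the kind of task the package automates, fitting a spectral lineshape model to noisy data by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amplitude, center, width, offset):
    """Simple Lorentzian lineshape model."""
    return amplitude * width**2 / ((x - center) ** 2 + width**2) + offset

x = np.linspace(-10, 10, 400)
y = lorentzian(x, 3.0, 1.2, 0.8, 0.1) + 0.05 * np.random.randn(x.size)  # "experimental" data

popt, pcov = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 1.0, 0.0])
print("fitted parameters:", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```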

7.
Despite all the work over the last decades, segmentation is still an important research field and a critical preprocessing step for image processing, mostly because finding a globally optimal threshold that works well for all kinds of images is a very difficult task that will probably never be accomplished. In recent years, fuzzy logic theory has been successfully applied to image thresholding. In this paper we describe a thresholding technique using Atanassov's intuitionistic fuzzy sets (A-IFSs). The approach uses Atanassov's intuitionistic index values to represent the hesitance of the expert in determining whether a pixel belongs to the background or to the object. First, we describe the general framework of this approach for bi-level thresholding. Then we present its natural extension to multilevel thresholding, which segments the image into several distinct regions corresponding to a background and several objects. Segmentation results and a comparison with Otsu's multilevel thresholding algorithm for the calculation of two and three thresholds are presented.
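A simplified sketch of the bi-level idea is shown below (Python). It uses a distance-to-class-mean membership, Sugeno's negation for non-membership, and the resulting hesitation (intuitionistic) index to score candidate thresholds; this is an illustration of the general scheme, not the paper's exact A-IFS construction:

```python
import numpy as np

def aifs_threshold(image, lam=1.0):
    """Pick the gray level that minimises the total hesitation (intuitionistic) index."""
    g = image.ravel().astype(float)
    gmax = g.max() if g.max() > 0 else 1.0
    best_t, best_score = None, np.inf
    for t in range(int(g.min()) + 1, int(g.max())):
        back, obj = g[g <= t], g[g > t]
        if back.size == 0 or obj.size == 0:
            continue
        mu = np.where(g <= t,
                      1.0 - np.abs(g - back.mean()) / gmax,   # membership to background
                      1.0 - np.abs(g - obj.mean()) / gmax)    # membership to object
        nu = (1.0 - mu) / (1.0 + lam * mu)                    # non-membership (Sugeno negation)
        pi = 1.0 - mu - nu                                    # Atanassov hesitation index
        if pi.sum() < best_score:
            best_t, best_score = t, pi.sum()
    return best_t

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print("selected threshold:", aifs_threshold(img))
```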

8.
Research on utilising social networks for teaching and learning is relatively scarce in the context of information systems; far more emphasis has been placed on studying how social networks are used to fulfil individuals' basic social needs. This study uses the unified theory of acceptance and use of technology (UTAUT2) to analyse students' intention to use, and use of, e-learning via Facebook. It incorporates playfulness into the UTAUT2 model and categorises the determinants of intention to use e-learning via Facebook into three categories: hedonic values, utilitarian values, and communication values. The data were collected in a two-stage survey from 170 undergraduate students, and the model was tested using structural equation modelling. We found that hedonic motivation, perceived playfulness, and performance expectancy were strong determinants of students' intention to use e-learning, while habit and facilitating conditions both positively affected students' use of e-learning via Facebook. The results provide new knowledge that academic institutions can use to create appropriate e-learning environments for teaching and learning. A number of theoretical and managerial implications for universities' technology implementations were also identified.

9.
Predicting imbibition into two-dimensional geometries is extremely important for developing new design principles for paper-based microfluidics. To this end, a two-dimensional model based on Richards' equation, which has been extensively applied in soil mechanics, is used in this work to model imbibition into paper-based networks. Compared with capillary-based models, the developed model is capable of predicting imbibition into two-dimensional domains. The numerical solution of the proposed model shows good agreement with experimental measurements of water imbibition into different chromatography-paper-based designs. It is expected that this framework can be applied to develop new design rules for controlling flow in paper-based microfluidic devices.
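For reference, Richards' equation in its moisture-content (diffusivity) form, written in generic notation that may differ from the paper's, reads

\[
\frac{\partial \theta}{\partial t} = \nabla \cdot \big( D(\theta)\, \nabla \theta \big),
\qquad D(\theta) = K(\theta)\,\frac{\partial h}{\partial \theta},
\]

where θ is the moisture content, K(θ) the unsaturated hydraulic conductivity, and h the capillary pressure head; the gravity term is typically negligible for thin horizontal paper networks.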

10.
We develop a weak Galerkin (WG) finite element method for Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both the displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit, fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides stable, oscillation-free approximations of the pressure without special treatment. Numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
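For reference, the classical displacement–pressure two-field formulation of Biot's consolidation model (generic notation, not necessarily the paper's) is

\[
-\nabla \cdot \big( 2\mu\,\varepsilon(u) + \lambda (\nabla \cdot u) I - \alpha p I \big) = f,
\qquad
\frac{\partial}{\partial t}\big( c_0 p + \alpha \nabla \cdot u \big) - \nabla \cdot (\kappa \nabla p) = g,
\]

where u is the displacement, p the pore pressure, ε(u) = (∇u + ∇uᵀ)/2, μ and λ the Lamé parameters, α the Biot–Willis constant, c_0 the storage coefficient, and κ the hydraulic conductivity.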

11.
Let τ(n) be the minimum number of arithmetic operations required to build the integer n from the constants 1 and 2. A sequence (x_n) is said to be easy to compute if there exists a polynomial p such that τ(x_n) ≤ p(log n) for all n ≥ 1. It is natural to conjecture that sequences such as ⌊2^n ln 2⌋ or n! are not easy to compute. In this paper we show that a proof of this conjecture for the first sequence would imply a superpolynomial lower bound for the arithmetic circuit size of the permanent polynomial. For the second sequence, a proof would imply a superpolynomial lower bound for the permanent or P ≠ PSPACE.

12.

Forward privacy of RFID systems, and its relaxed version narrow forward privacy, are generally considered satisfactory for practical needs. Unfortunately, attempts to achieve forward privacy with symmetric-key cryptography have failed. Moreover, all symmetric-key-based RFID systems proposed so far that are strictly narrow forward private (that is, narrow forward private but not forward private) suffer from desynchronization. Under these circumstances, the question of whether any attempt to exceed these limits is doomed to failure has frequently been asked. This paper aims to clarify the matter. We show that forward privacy in Vaudenay's RFID model cannot be achieved with symmetric-key cryptography. We then show that strictly narrow forward privacy can be achieved with symmetric-key cryptography only by RFID systems with unbounded desynchronization. This last result also holds for strictly narrow destructive privacy and strictly narrow strong privacy, if one wants to achieve them with symmetric-key cryptography.

13.
We show that under the matrix product state formalism the states produced in Shor's algorithm can be represented using \(O(\max (4lr^2, 2^{2l}))\) space, where l is the number of bits in the number to factorise and r is the order, i.e., the solution to the related order-finding problem. The reduction in space compared with an amplitude-formalism approach is significant, allowing simulations as large as 42 qubits to be run on a single processor with 32 GB RAM. The approach is readily adapted to a distributed-memory environment, and we have simulated a 45-qubit case using 8 cores with 16 GB RAM in approximately 1 h.

14.
This paper addresses the problem of early diagnosis of PD (Parkinson's disease) by classifying characteristic features of a person's voice, given that 90% of people with PD suffer from speech disorders. We collected 375 voice samples from healthy subjects and from people suffering from PD. From each voice sample we extracted features using the MFCC and PLP cepstral techniques. All the features were analyzed and selected by feature-selection algorithms to classify the subjects into 4 classes according to the UPDRS (Unified Parkinson's Disease Rating Scale) score. The advantages of our approach are the results obtained and the simplicity of the technique used, so it could also be extended to other voice pathologies. We used discriminant analysis as the classifier, based on the results obtained in previous multiclass classification works. We obtained accuracy of up to 87.6% for discrimination between PD patients at 3 different stages and healthy controls, using MFCC along with the LLBFS algorithm.
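The Python sketch below illustrates the general shape of such a pipeline, MFCC summary features followed by discriminant analysis. The helper load_dataset and the feature summary are hypothetical, and the paper's specific feature-selection step (LLBFS) is not reproduced:

```python
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def voice_features(path, n_mfcc=13):
    """Summarise one recording by the mean and std of each MFCC trajectory."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths, labels = load_dataset()        # hypothetical helper: file paths + UPDRS-based classes
X = np.vstack([voice_features(p) for p in paths])
clf = LinearDiscriminantAnalysis()    # discriminant analysis as the classifier
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```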

15.
In this paper, we propose a new enhanced absolute moment block truncation coding (AMBTC) image compression method based on interpolation. The proposed...
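For context, plain AMBTC encodes each block by a bitmap and two moment-preserving quantization levels; the Python sketch below shows that baseline coding, which the proposed interpolation-based enhancement builds on, without reproducing the enhancement itself:

```python
import numpy as np

def ambtc_encode(block):
    """Encode one block as (bitmap, low mean, high mean)."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, float(low), float(high)

def ambtc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.random.randint(0, 256, (4, 4))
bitmap, low, high = ambtc_encode(block)
print("mean absolute error:", np.abs(block - ambtc_decode(bitmap, low, high)).mean())
```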

16.
While mechanistic models tend to be detailed, they are less detailed than the real systems they seek to describe, so judgements are made about the appropriate level of detail during model development. These judgements are difficult to test; consequently, it is easy for models to become over-parameterised, potentially increasing uncertainty in predictions. The work we describe is a step towards addressing these difficulties. We propose and implement a method which explores a family of simpler models obtained by replacing model variables with constants (model reduction by variable replacement). The procedure iteratively searches the simpler model formulations and compares models in terms of their ability to predict observed data, evaluated within a Bayesian framework. The results can be summarised as posterior model probabilities and replacement probabilities for individual variables, which lend themselves to mechanistic interpretation. This provides powerful diagnostic information to support model development and can identify areas of model over-parameterisation, with implications for the interpretation of model results. We present the application of the method to 3 example models. In each case, reduced models are identified which outperform the original full model in terms of comparison to observations, suggesting that some over-parameterisation occurred during model development. We argue that the proposed approach is relevant to anyone involved in the development or use of process-based mathematical models, especially those where understanding is encoded via empirically based relationships.

17.
This paper analyzes the application of Moran's index and Geary's coefficient to the characterization of lung nodules as malignant or benign in computerized tomography images. The characterization method is based on a process that verifies which combination of the proposed measures best discriminates between benign and malignant nodules, using stepwise discriminant analysis. A linear discriminant analysis procedure was then performed using the selected features to evaluate their ability to predict the classification of each nodule. To verify this application we also describe tests carried out on a sample of 36 nodules: 29 benign and 7 malignant. A leave-one-out procedure was used to provide a less biased estimate of the linear discriminator's performance. The two analyzed functions and their combinations provided accuracy above 90% and an area under the receiver operating characteristic (ROC) curve above 0.85, which indicates promising potential for use as nodule signature measures. The preliminary results of this approach to characterizing nodules using the two functions are very encouraging.
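For reference, the two texture measures are defined (in standard notation) as

\[
I = \frac{N}{W}\,
\frac{\sum_i \sum_j w_{ij} (x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2},
\qquad
C = \frac{N-1}{2W}\,
\frac{\sum_i \sum_j w_{ij} (x_i - x_j)^2}{\sum_i (x_i - \bar{x})^2},
\]

where x_i is the intensity of voxel i, w_ij are spatial weights, and W = Σ_i Σ_j w_ij.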

18.
This study investigates the temporal sampling error in observations of the Earth's outgoing radiation from a potential Earth-observation platform on the Moon. To simulate the Earth's outgoing radiation as viewed from a Moon-based platform, we used datasets from NASA's Goddard Earth Observing System Version 5 (GEOS-5) as the truth. The analysis proceeds by sampling the simulated time series. The sampling uncertainty associated with a given sampling interval is measured by computing the root-mean-square error (RMSE) between the original and subsampled time series. The effect of different sampling intervals is evaluated by the maximum bias. The effect of the sampling start time is estimated by comparing correlations between the original series and subsampled series with different start times at a given sampling interval. The results show that the temporal sampling errors exhibit periodic uncertainties and bias, and that a 4-h sampling interval is a turning point: sampling intervals larger than 4 h result in large uncertainties and bias.
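The Python sketch below illustrates the kind of subsampling experiment described, measuring the RMSE between an hourly series and its reconstruction from coarser sampling. The synthetic series and the linear-interpolation reconstruction are assumptions standing in for the GEOS-5-based simulation:

```python
import numpy as np

def sampling_rmse(series, interval_h, start_h=0):
    """RMSE between the full hourly series and its subsampled reconstruction."""
    t = np.arange(series.size)                    # hourly time axis
    ts = t[start_h::interval_h]                   # retained sampling instants
    reconstructed = np.interp(t, ts, series[ts])  # linear reconstruction of the gaps
    return np.sqrt(np.mean((series - reconstructed) ** 2))

hours = np.arange(24 * 30)                        # one synthetic month
flux = 240 + 15 * np.sin(2 * np.pi * hours / 24) + 5 * np.random.randn(hours.size)
for dt in (1, 2, 4, 6, 12):
    print(f"{dt:2d} h sampling -> RMSE {sampling_rmse(flux, dt):.2f} W/m^2")
```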

19.

As a particular case study of the formal verification of state-of-the-art, real software, we discuss the specification and verification of a corrected version of the linked-list implementation provided by the Java Collections Framework.

20.
In this paper we describe a new model suitable for optimization problems with explicitly unknown objective functions, driven by user preferences. The model is able to learn unknown objective functions and thereby also learn the user's preferences. It consists of neural networks using fuzzy membership functions and interactive evolutionary algorithms in the learning process. Fuzzy membership functions of basic human values and their priorities were prepared using Schwartz's model of basic human values (achievement, benevolence, conformity, hedonism, power, security, self-direction, stimulation, tradition, and universalism). The quality of the model was tested on "the most attractive font face problem" and evaluated using the following criteria: the speed of computing the optimal parameters, the precision of the results, the Wilcoxon signed-rank test, and the similarity of letter images. The results show the developed model to be highly usable for modeling user preferences.
