Similar Documents
20 similar documents found (search time: 825 ms)
1.
Modeling manufacturing processes assists the design of new systems, allowing predictions of future behavior, identifying areas for improvement, and evaluating changes to existing systems. Probabilistic Boolean networks (PBNs) have been used to study biological systems, since they combine uncertainty with a rule-based representation. A novel approach is proposed that models the design of an automated manufacturing assembly process using PBNs to generate quantitative data for occurrence assessment in design failure mode and effects analysis (FMEA). FMEA is a widely used tool in risk assessment (RA) to ensure that design outputs consistently deliver the intended level of performance; the effectiveness of RA depends on the robustness of the data used. Temporal logic is applied to analyze state successions in a transition system, while interactions and dynamics over a set of Boolean variables are captured using PBNs. Designs are therefore enhanced through risk assessment, using the proposed tools in the early phases of manufacturing system design. A two-sample t test demonstrates that the proposed model provides values close to the expected values, and consequently models the observable phenomena (\(p > 0.05\)). Simulations are used to generate the data required to conduct inferential statistical tests that determine the level of correspondence between model predictions and real machine data.
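To make the PBN mechanics concrete, here is a minimal simulation sketch in Python. The two-node network, its predictor functions, and their selection probabilities are hypothetical illustrations, not the paper's model; the sketch only shows how a PBN can be sampled to estimate how often a designated failure state is visited, the kind of quantitative occurrence figure an FMEA worksheet consumes.

```python
import random

# Minimal probabilistic Boolean network (PBN): each node has several
# candidate Boolean predictor functions; at every step one predictor per
# node is sampled according to its selection probability. The network
# and the probabilities below are hypothetical, for illustration only.
predictors = {
    0: [(lambda s: s[1], 0.7), (lambda s: s[0] or s[1], 0.3)],
    1: [(lambda s: s[0] and s[1], 0.6), (lambda s: not s[1], 0.4)],
}

def step(state):
    """Advance the PBN one step by sampling a predictor per node."""
    new_state = []
    for node in sorted(predictors):
        funcs, weights = zip(*predictors[node])
        f = random.choices(funcs, weights=weights, k=1)[0]
        new_state.append(bool(f(state)))
    return tuple(new_state)

def occurrence_probability(failure_state, runs=10000, horizon=50):
    """Estimate how often a given (failure) state is visited within the
    horizon -- a quantitative occurrence estimate for FMEA."""
    hits = 0
    for _ in range(runs):
        state = (random.random() < 0.5, random.random() < 0.5)
        for _ in range(horizon):
            state = step(state)
            if state == failure_state:
                hits += 1
                break
    return hits / runs

print(occurrence_probability((True, False)))
```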

2.
G. Alefeld, Z. Wang. Computing, 2008, 83(4): 175-192
In this paper we consider the complementarity problem NCP(f) with \(f(x) = Mx + \varphi(x)\), where \(M \in \mathbb{R}^{n \times n}\) is a real matrix and \(\varphi\) is a so-called tridiagonal (nonlinear) mapping. This problem occurs, for example, if certain classes of free boundary problems are discretized. We compute error bounds for approximations \(\hat{x}\) to a solution \(x^*\) of the discretized problems. The error bounds are improved by an iterative method and can be made arbitrarily small. The ideas are illustrated by numerical experiments.
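For readers unfamiliar with the notation, NCP(f) is the standard nonlinear complementarity problem; the block below restates its textbook form with this paper's f, nothing specific to the error-bound construction itself.

```latex
% Standard form of the nonlinear complementarity problem NCP(f):
% find x \in \mathbb{R}^n such that
\[
x \ge 0, \qquad f(x) = Mx + \varphi(x) \ge 0, \qquad x^{\top} f(x) = 0,
\]
% i.e., componentwise x_i \ge 0, f_i(x) \ge 0, and x_i f_i(x) = 0 for all i.
```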

3.
In this paper, we focus on two issues: (1) SVM is very sensitive to noise, and (2) the SVM solution does not take into account the intrinsic structure and discriminant information of the data. To address these two problems, we first propose an integration model that incorporates both the local manifold structure and the local discriminant information into \(\ell_1\) graph embedding. We then add the integration model to the objective function of the υ-support vector machine, yielding a discriminant sparse neighborhood preserving embedding υ-support vector machine (υ-DSNPESVM). Theoretical analysis demonstrates that υ-DSNPESVM is a reasonable maximum margin classifier and can attain a lower upper bound on the generalization error by minimizing the integration model and the upper bound of the margin error. Moreover, in the nonlinear case, we construct a kernel sparse representation-based \(\ell_1\) graph for υ-DSNPESVM, which is more conducive to improving classification accuracy than an \(\ell_1\) graph constructed in the original space. Experimental results on real datasets show the effectiveness of the proposed υ-DSNPESVM method.

4.
A flow-shop batching problem with consistent batches is considered, in which the processing times of all jobs on each machine are equal to p and all batch set-up times are equal to s. In such a problem, one has to partition the set of jobs into batches and to schedule the batches on each machine. The processing time of a batch \(B_i\) is the sum of the processing times of the operations in \(B_i\), and the earliest start of \(B_i\) on a machine is the finishing time of \(B_i\) on the previous machine plus the set-up time s. Cheng et al. (Naval Research Logistics 47:128–144, 2000) provided an O(n) pseudopolynomial-time algorithm for solving the special case of the problem with two machines. Mosheiov and Oron (European Journal of Operational Research 161:285–291, 2005) developed an algorithm of the same time complexity for the general case with more than two machines. Ng and Kovalyov (Journal of Scheduling 10:353–364, 2007) improved the pseudopolynomial complexity to \(O(\sqrt{n})\). In this paper, we provide a polynomial-time algorithm of time complexity \(O(\log^3 n)\).
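The scheduling model can be made operational in a few lines. Below is a sketch that evaluates the makespan of a given consistent batching under the rules quoted above; whether a set-up also precedes a batch on the first machine, or separates consecutive batches on the same machine, varies between formulations, so charging s only on machine-to-machine transfers is an assumption here, not the paper's definition.

```python
def makespan(batch_sizes, m, p, s):
    """Completion time of the last batch on the last of m machines.

    Batch processing time = p * (number of jobs in the batch). A batch
    starts on a machine once (i) the previous batch on that machine has
    finished and (ii) the same batch has finished on the previous
    machine plus the set-up time s (the rule stated in the abstract).
    """
    finish = [0.0] * m  # finish time of the latest batch per machine
    for size in batch_sizes:
        proc = p * size
        for mach in range(m):
            ready = finish[mach - 1] + s if mach > 0 else 0.0
            finish[mach] = max(finish[mach], ready) + proc
    return finish[-1]

# Three consistent batches of sizes 2, 3, 1 on m = 3 machines.
print(makespan([2, 3, 1], m=3, p=1.0, s=0.5))  # 13.0
```

A brute-force solver for tiny instances could enumerate all ordered partitions of the job set and keep the batching minimizing this function.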

5.
Hatem M. Bahig. Computing, 2011, 91(4): 335-352
An addition chain for a natural number n is a sequence \(1 = a_0 < a_1 < \cdots < a_r = n\) of numbers such that for each \(0 < i \le r\), \(a_i = a_j + a_k\) for some \(0 \le k \le j < i\). The minimal length of an addition chain for n is denoted by \(\ell(n)\). If \(j = i - 1\), then step i is called a star step. We show that there is a minimal length addition chain for n whose last four steps are stars. We then conjecture that there is a minimal length addition chain for n whose last \(\lfloor\frac{\ell(n)}{2}\rfloor\) steps are stars, and verify that the conjecture is true for all numbers up to \(2^{18}\). Applying the result and the conjecture to the generation of a minimal length addition chain reduces the average CPU time by 23–29% and 38–58%, respectively, and the memory storage by 16–18% and 26–45%, respectively, for m-bit numbers with 14 ≤ m ≤ 22.
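The definitions are easy to operationalize. The sketch below verifies that a sequence is an addition chain and flags which steps are star steps (some decomposition uses \(a_{i-1}\)); the example chain for n = 15 is illustrative, not produced by the paper's algorithm.

```python
def is_addition_chain(chain):
    """Check 1 = a_0 < a_1 < ... < a_r = n with each a_i = a_j + a_k for
    some k <= j < i. Returns (valid, star_flags), where star_flags[i] is
    True when a decomposition of a_i uses a_{i-1} (a 'star step')."""
    if chain[0] != 1 or any(a >= b for a, b in zip(chain, chain[1:])):
        return False, []
    stars = [False] * len(chain)
    for i in range(1, len(chain)):
        pairs = [(j, k) for j in range(i) for k in range(j + 1)
                 if chain[j] + chain[k] == chain[i]]
        if not pairs:
            return False, []
        stars[i] = any(j == i - 1 for j, _ in pairs)
    return True, stars

# A length-5 chain for 15; every step here happens to be a star step.
valid, stars = is_addition_chain([1, 2, 3, 6, 12, 15])
print(valid, stars)
```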

6.
Support vector machines are arguably one of the most successful methods for data classification, but when they are used in regression problems, the literature suggests that their performance is no longer state-of-the-art. This paper compares the performance of three machine learning methods for the prediction of independent output cutting parameters in a high speed turning process. The observed parameters were surface roughness (\(R_a\)), cutting force \((F_{c})\), and tool lifetime (T). For the modelling, support vector regression (SVR), polynomial (quadratic) regression, and an artificial neural network (ANN) were used. In this research, polynomial regression outperformed SVR and ANN for \(F_{c}\) and \(R_a\) prediction, while ANN had the best performance for T but the worst for \(F_{c}\) and \(R_a\). The study also showed that within SVR, the polynomial kernel outperformed the linear and RBF kernels. In addition, there was no significant difference in performance between SVR and polynomial regression for the prediction of all three output machining parameters.
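A minimal scikit-learn sketch of the same three-way comparison follows. The synthetic data and hyperparameters are stand-ins, not the paper's turning experiments; the point is only the experimental shape — SVR with two kernels, quadratic regression, and a small ANN scored by cross-validation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (cutting speed, feed, depth of cut) -> Ra.
rng = np.random.default_rng(0)
X = rng.uniform([100, 0.05, 0.2], [400, 0.35, 2.0], size=(60, 3))
y = 800 * X[:, 1] ** 2 / X[:, 0] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 60)

models = {
    "SVR (RBF)":  make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10)),
    "SVR (poly)": make_pipeline(StandardScaler(),
                                SVR(kernel="poly", degree=2, C=10)),
    "Quadratic":  make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "ANN":        make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(10,),
                                             max_iter=5000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:12s} mean R^2 = {scores.mean():.3f}")
```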

7.
The design of rectangular concrete-filled steel tubular (CFT) columns has been a major concern owing to their complex constraint mechanism. Most existing methods are based on simplified mechanical models with limited experimental data, which is not reliable under many conditions, e.g., for columns using high strength materials. Artificial neural network (ANN) models have proven effective for complex problems in many areas of civil engineering in recent years. In this paper, ANN models were employed to predict the axial bearing capacity of rectangular CFT columns based on experimental data: 305 experimental data points were collected from the literature, of which 275 samples were used to train the ANN models and 30 for testing. Based on a comparison among different models, artificial neural network model 1 (ANN1) and artificial neural network model 2 (ANN2), each with a 20-neuron hidden layer, were chosen as the prediction models. ANN1 has five inputs: the length (D) and width (B) of the cross section, the thickness of the steel (t), the yield strength of the steel (\(f_y\)), and the cylinder strength of the concrete (\(f_c\)). ANN2 has ten inputs: D, B, t, \(f_y\), \(f_c\), the length-to-width ratio (D/B), the length-to-thickness ratio (D/t), the width-to-thickness ratio (B/t), the restraint coefficient (ξ), and the steel ratio (α). The axial bearing capacity is the output for both models. The outputs from ANN1 and ANN2 were verified and compared with those from EC4, ACI, GJB4142 and AISC360-10. The results show that the implemented models have good prediction and generalization capacity. A parametric study conducted using ANN1 and ANN2 indicates that the effect of the basic column parameters on the axial bearing capacity of rectangular CFT columns differs from that assumed in the design codes. The results also provide a convincing design reference for rectangular CFT columns.
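The ANN1 architecture described (five inputs, one 20-neuron hidden layer, one output) is easy to reproduce as a sketch. The random samples and the crude capacity formula below are placeholders for illustration; only the input list, the 275/30 split, and the network shape follow the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Placeholder data with ANN1's five inputs: D, B, t, f_y, f_c
# (mm, mm, mm, MPa, MPa). Not the paper's 305 experimental samples.
rng = np.random.default_rng(1)
X = rng.uniform([100, 100, 2, 235, 20], [400, 400, 12, 460, 80],
                size=(305, 5))
D, B, t, fy, fc = X.T
# Crude stand-in for axial capacity: steel area * f_y + core area * f_c.
y = (2 * (D + B) * t * fy + (D - 2 * t) * (B - 2 * t) * fc) / 1000.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=30,
                                          random_state=1)  # 275 / 30
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=10000, random_state=1))
model.fit(X_tr, y_tr)
print("test R^2:", r2_score(y_te, model.predict(X_te)))
```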

8.
We prove that any balanced incomplete block design B(v, k, 1) generates a near-resolvable balanced incomplete block design NRB(v, k − 1, k − 2). We establish a one-to-one correspondence between near-resolvable block designs NRB(v, k − 1, k − 2) and the subclass of nonbinary (optimal, equidistant) constant-weight codes meeting the generalized Johnson bound.

9.
Passive asymmetric breakup of a droplet can be achieved in microchannels of various geometries. In order to study the effects of different geometries on the asymmetric breakup of a droplet, four types of asymmetric microchannels with topologically equivalent geometry are designed: T-90, Y-120, Y-150, and I-180 microchannels. A three-dimensional volume-of-fluid multiphase model is employed to investigate the asymmetric rheological behaviors of a droplet numerically. Three regimes of rheological behavior are observed as a function of the capillary number Ca and the asymmetry As, defined by \(As = (b_1 - b_2)/(b_1 + b_2)\) (where \(b_1\) and \(b_2\) are the widths of the two asymmetric sidearms). A power law model based on three major factors (Ca, As, and the initial volume ratio \(r_0\)) is employed to describe the volume ratio of the two daughter droplets. Analysis of the pressure fields shows that the pressure gradient inside the droplet is one of the major factors causing droplet translation during its asymmetric breakup. Beyond these similarities among the microchannels, the asymmetric breakup also shows slight differences among them, as the various geometries enhance or constrain the translation of the droplet and the cutting action of the flows to different degrees. The I-180 microchannel is found to have the smallest critical capillary number and the shortest splitting time, and to be the least prone to generating satellite droplets.

10.
To help developers allocate their testing and debugging efforts effectively, many software defect prediction techniques have been proposed in the literature. These techniques can be used to predict classes that are more likely to be buggy based on the past history of classes, methods, or certain other code elements, and they are effective provided that a sufficient amount of data is available to train a prediction model. However, sufficient training data are rarely available for new software projects. To resolve this problem, cross-project defect prediction, which transfers a prediction model trained on data from one project to another, was proposed and is regarded as a new challenge in the area of defect prediction. Thus far, only a few cross-project defect prediction techniques have been proposed. To advance the state of the art, in this study we investigated seven composite algorithms that integrate multiple machine learning classifiers to improve cross-project defect prediction. To evaluate the performance of the composite algorithms, we performed experiments on 10 open-source software systems from the PROMISE repository, containing a total of 5,305 instances labeled as defective or clean. We compared the composite algorithms with the combined defect predictor that uses logistic regression as the meta-classification algorithm (CODEP\(_{\text{Logistic}}\)), the most recent cross-project defect prediction algorithm, in terms of two standard evaluation metrics: cost effectiveness and F-measure. Our experimental results show that several algorithms outperform CODEP\(_{\text{Logistic}}\): maximum voting shows the best performance in terms of F-measure, with an average F-measure superior to that of CODEP\(_{\text{Logistic}}\) by 36.88%, and bootstrap aggregation (BaggingJ48) shows the best performance in terms of cost effectiveness, with an average cost effectiveness superior to that of CODEP\(_{\text{Logistic}}\) by 15.34%.
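As a sketch of the two winning composite schemes, the snippet below builds a maximum-voting ensemble and a bagged decision-tree ensemble (scikit-learn's CART standing in for J48) and scores both by F-measure on synthetic, imbalanced placeholder data; nothing here reproduces the PROMISE experiments or the cross-project transfer setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder "project" data; a real cross-project setup trains on one
# project's modules and evaluates on another's.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8],
                           random_state=0)

# Maximum voting over heterogeneous base classifiers.
voting = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard")

# Bootstrap aggregation over decision trees (CART as a J48 analogue).
bagging = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                            n_estimators=50, random_state=0)

for name, clf in [("MaxVoting", voting), ("BaggingDT", bagging)]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(name, "F-measure:", round(f1, 3))
```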

11.
In several real-world node label prediction problems on graphs, in fields ranging from computational biology to World Wide Web analysis, nodes can be partitioned into categories different from the classes to be predicted, on the basis of their characteristics or their common properties. Such partitions may provide further information about node classification that classical machine learning algorithms do not take into account. We introduce a novel family of parametric Hopfield networks (m-category Hopfield networks) and a novel algorithm (Hopfield multi-category, HoMCat), designed to exploit the presence of property-based partitions of nodes into multiple categories. Moreover, the proposed model adopts a cost-sensitive learning strategy to prevent the marked decay in performance usually observed when instance labels are unbalanced, that is, when one class of labels is strongly underrepresented relative to the other. We validate the proposed model on both synthetic and real-world data in the context of multi-species function prediction, where the classes to be predicted are Gene Ontology terms and the categories are the different species in the multi-species protein network. We carried out an intensive experimental validation, which on the one hand compares HoMCat with several state-of-the-art graph-based algorithms, and on the other hand reveals that exploiting meaningful prior partitions of the input data can substantially improve classification performance.

12.
Learning from data that are too big to fit into memory poses great challenges to currently available learning approaches. Averaged n-Dependence Estimators (AnDE) allow flexible learning from out-of-core data by varying the value of n (the number of super parents), which makes AnDE especially appropriate for learning from large quantities of data. The memory requirement of AnDE, however, increases combinatorially with the number of attributes and the parameter n. In large-scale learning, the number of attributes is often large, and a high n is also desirable to achieve low-bias classification. In order to achieve the lower bias of AnDE with higher n but a smaller memory requirement, we propose a memory-constrained selective AnDE algorithm that makes two passes of learning through the training examples. The first pass performs attribute selection on the super parents according to the available memory, while the second learns an AnDE model whose parents range only over the selected attributes. Extensive experiments show that the new selective AnDE has considerably lower bias and prediction error relative to A\(n'\)DE, where \(n' = n-1\), while maintaining the same space complexity and similar time complexity. The proposed algorithm works well on categorical data; numerical data sets need to be discretized first.

13.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation” (Potgieter and Rosinger 2010: 853). But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.

14.
Although many data hiding schemes have been proposed in the frequency domain, the tradeoff between hiding capacity and image quality remains an open problem. In this paper, we propose a novel reversible data hiding scheme based on the Haar discrete wavelet transform (DWT) and an interleaving-prediction method. First, a one-level Haar DWT is applied to the cover image, yielding four sub-bands: LL, HL, LH and HH. The sub-bands HL, LH and HH are chosen for embedding. The wavelet coefficients of the chosen sub-bands are then zig-zag scanned, and two adjacent coefficients are used for prediction. The secret data is embedded in the prediction errors, i.e., the differences between the original and predicted values of the wavelet coefficients. The experimental results show that our scheme performs well compared with other existing reversible data hiding schemes.
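A compact sketch of the pipeline's first stages follows: an unnormalized one-level 2-D Haar transform, a zig-zag scan of a detail sub-band, and embedding of bits into prediction errors via the common reversible expansion rule e → 2e + b. The particular interleaving chosen here (predictors at even scan positions left untouched) and the expansion rule are simplifying assumptions; the paper's exact scheme may differ.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform (unnormalized, integer-friendly)."""
    a = img[0::2, :] + img[1::2, :]   # vertical sums
    d = img[0::2, :] - img[1::2, :]   # vertical differences
    LL = a[:, 0::2] + a[:, 1::2]
    HL = a[:, 0::2] - a[:, 1::2]
    LH = d[:, 0::2] + d[:, 1::2]
    HH = d[:, 0::2] - d[:, 1::2]
    return LL, HL, LH, HH

def zigzag_indices(rows, cols):
    """Anti-diagonal (zig-zag) scan order of a rows x cols array."""
    return sorted(((r, c) for r in range(rows) for c in range(cols)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else -rc[1]))

def embed_bits(band, bits):
    """Interleaving prediction: coefficients at odd scan positions are
    predicted from their untouched predecessors; one bit is embedded by
    expanding the prediction error e -> 2e + b, which is reversible
    (bit = parity of the new error, e = half the rest)."""
    flat = band.copy()
    order = zigzag_indices(*band.shape)
    targets = [(order[i - 1], order[i]) for i in range(1, len(order), 2)]
    for bit, ((pr, pc), (r, c)) in zip(bits, targets):
        e = flat[r, c] - flat[pr, pc]
        flat[r, c] = flat[pr, pc] + 2 * e + bit
    return flat

img = np.arange(64).reshape(8, 8)          # toy integer "cover image"
LL, HL, LH, HH = haar_dwt2(img)
print(embed_bits(HL, [1, 0, 1]))           # HL sub-band with 3 bits hidden
```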

15.
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k other vertices of the subgraph. k-core decomposition is often used in large-scale network analysis, for tasks such as community detection, protein function prediction, visualization, and solving NP-hard problems efficiently on real networks (e.g., maximal clique finding). In many real-world applications networks change over time, so it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the vertices whose maximum k-core values have changed, and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs at varying scales; for a graph of 16 million vertices, we observe relative throughputs reaching a factor of a million over the non-incremental algorithms.
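For reference, the non-incremental baseline that such algorithms accelerate is the classic peeling computation: repeatedly remove a minimum-degree vertex and record its degree at removal (never letting the recorded value decrease) as its core number. A minimal heap-based sketch:

```python
import heapq
from collections import defaultdict

def core_numbers(edges):
    """Core number of every vertex via peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    core, removed, k = {}, set(), 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                      # stale heap entry
        k = max(k, d)                     # core value never decreases
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
                heapq.heappush(heap, (deg[w], w))
    return core

# Triangle plus a pendant vertex: the triangle is a 2-core, the pendant 1.
print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
# {4: 1, 1: 2, 2: 2, 3: 2}
```

An incremental algorithm, as in the paper, avoids rerunning this from scratch by repairing core numbers only inside a small affected subgraph after each edge insertion or deletion.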

16.
Research on smart phones has recently received much attention because of its wide potential applications. One interesting and useful topic is mining and predicting users' mobile application (App) usage behaviors. With more and more Apps installed on users' smart phones, users may spend considerable time finding the Apps they want by swiping the screen. App prediction systems can reduce both search time and launch time, since Apps that are likely to be launched can be preloaded into memory before they are actually used. Although some previous studies have addressed App usage analysis, they recommend Apps for users based only on the frequencies of App usage. We argue that the relationship between App usage demands and users' recent spatial and temporal behaviors may be strong. In this paper, we propose the Spatial and Temporal App Recommender (STAR), a novel framework to predict and recommend Apps for mobile users in a smart phone environment. The STAR framework consists of four major modules. We first extract meaningful and semantic location movements from geographic GPS trajectory data with the Spatial Relation Mining Module and generate suitable temporal segments with the Temporal Relation Mining Module. We then design the Spatial and Temporal App Usage Pattern Mine (STAUP-Mine) algorithm to efficiently discover mobile users' Spatial and Temporal App Usage Patterns (STAUPs). Furthermore, an App Usage Demand Prediction Module is presented to predict forthcoming App usage demands according to the discovered STAUPs and spatial/temporal relations. To our knowledge, this is the first study to simultaneously consider spatial movements, temporal properties, and App usage behavior for App usage pattern mining and demand prediction. Rigorous experimental analysis on two real mobile App datasets shows that the STAR framework delivers excellent prediction performance.

17.
In negation-limited complexity, one considers circuits with a limited number of NOT gates, motivated by the gap in our understanding of monotone versus general circuit complexity, and hoping to better understand the power of NOT gates. We give improved lower bounds for the size (the number of AND/OR/NOT gates) of negation-limited circuits computing Parity and for the size of negation-limited inverters. An inverter is a circuit with inputs \(x_1, \ldots, x_n\) and outputs \(\neg x_1, \ldots, \neg x_n\). We show that: (a) for \(n = 2^r - 1\), circuits computing Parity with \(r - 1\) NOT gates have size at least \(6n - \log_2(n+1) - O(1)\), and (b) for \(n = 2^r - 1\), inverters with r NOT gates have size at least \(8n - \log_2(n+1) - O(1)\). We derive these bounds by considering the minimum size of a circuit with at most r NOT gates that computes Parity for sorted inputs \(x_1 \le \cdots \le x_n\). For an arbitrary r, we completely determine this minimum size: it is \(2n - r - 2\) for odd n and \(2n - r - 1\) for even n when \(\lceil \log_2(n+1) \rceil - 1 \le r \le n/2\), and it is \(\lfloor 3n/2 \rfloor - 1\) for \(r \ge n/2\). We also determine the minimum size of an inverter for sorted inputs with at most r NOT gates: it is \(4n - 3r\) for \(\lceil \log_2(n+1) \rceil \le r \le n\). In particular, the negation-limited inverter for sorted inputs due to Fischer, which is a core component in all known constructions of negation-limited inverters, is shown to have the minimum possible size. Our fairly simple lower bound proofs use gate elimination arguments in a somewhat novel way.
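To make the closed-form sizes concrete, here are the quoted formulas instantiated at a small parameter value; this is pure arithmetic on the statements above, no new claims.

```latex
% Minimum size of a circuit computing Parity on sorted inputs at
% n = 7 (odd), r = 2: here \lceil \log_2(n+1) \rceil - 1 = 2 \le r \le n/2,
% so the minimum size is
\[
2n - r - 2 = 2 \cdot 7 - 2 - 2 = 10.
\]
% Minimum size of a sorted-input inverter at n = 7, r = 3
% (here \lceil \log_2(n+1) \rceil = 3 \le r \le n):
\[
4n - 3r = 4 \cdot 7 - 3 \cdot 3 = 19.
\]
```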

18.
This paper proposes an orthogonal analysis method for decoupling the multiple nozzle geometry parameters of microthrusters, so that a reconfigured design can be implemented to generate a proper thrust. In this method, the effects of the nozzle geometry parameters, including the throat width \(W_t\), half convergence angle \(\theta_{in}\), half divergence angle \(\theta_{out}\), exit-to-throat section ratio \(W_e/W_t\), and throat radius of curvature \(R_t/W_t\), on the performance of microthrusters are ranked by range analysis. The analysis shows that throat width strongly affects thrust, its range value of 67.53 mN being far larger than the range values of the other geometry parameters. For average specific impulse (ASI), the range values of the exit-to-throat section ratio \(W_e/W_t\) and the half divergence angle \(\theta_{out}\) are 4.82 s and 3.72 s, respectively, while the half convergence angle (range value 0.39 s) and the throat radius (0.32 s) have less influence on ASI. When the half convergence angle is increased from 10° to 40° and the throat radius of curvature from 3 to 9, the average specific impulse first decreases and then increases. A MEMS solid propellant thruster (MSPT) with the reconfigured nozzle geometry is fabricated to verify the feasibility of the proposed method; the thrust of the microthruster reaches 25 mN, and the power is estimated to be 0.84 W. This work provides a design guideline for reasonably configuring the geometry parameters of microthrusters.
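Range analysis itself is a simple computation: average the response at each level of each factor across the orthogonal array, then take the spread (max minus min) of those level means; a larger spread means a stronger effect. The sketch below runs it on a standard L9(3^4) orthogonal array with made-up thrust values; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels each,
# with a hypothetical measured thrust response (mN) per run.
levels = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
thrust = np.array([12.0, 14.5, 16.0, 30.1, 32.8, 29.5, 61.0, 64.2, 59.7])
factors = ["W_t", "theta_in", "theta_out", "We/Wt"]

for j, name in enumerate(factors):
    level_means = [thrust[levels[:, j] == lv].mean() for lv in range(3)]
    rng = max(level_means) - min(level_means)   # range value of factor j
    print(f"{name:10s} level means {np.round(level_means, 2)}"
          f"  range {rng:.2f} mN")
```

With these made-up responses the first factor dominates the range values, mirroring the abstract's finding that throat width is the decisive parameter for thrust.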

19.
We present a first study concerning the optimization of a nonlinear fuzzy function f depending both on a crisp variable and on a fuzzy number, so that the function value is itself a fuzzy number. More specifically, given a real fuzzy number \(\tilde{a} \in F\) and a function \(f(a,x): \mathbb{R}^2 \to \mathbb{R}\), we consider the fuzzy extension induced by f, \(\tilde{f}: F \times \mathbb{R} \to F\), \(\tilde{f}(\tilde{a}, x) = \tilde{Y}\). If K is a convex subset of \(\mathbb{R}\), the problem we consider is “maximizing” \(\tilde{f}(\tilde{a}, x)\), \(x \in K\). The first issue is the meaning of the word “maximizing”: it is well known that ranking fuzzy numbers is a complex matter. Following a general method, we introduce a real function (evaluation function) on real fuzzy numbers, in order to obtain a crisp rating induced by the order of the real line. In this way, the optimization problem over fuzzy numbers can be written as an optimization problem for the real-valued function obtained by composing f with a suitable evaluation function. This approach allows us to state a necessary and sufficient condition for \(\bar{x} \in K\) to be the maximum of \(\tilde{f}\) on K when f(a,x) is convex-concave (Theorem 4.1).
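The evaluation-function device reduces the fuzzy problem to a crisp one, which a short sketch can illustrate. The triangular representation, the centroid-style evaluation map, and the sample objective below are hypothetical choices for illustration, not the paper's; they only show the composition (evaluation ∘ fuzzy extension) being maximized over K.

```python
from dataclasses import dataclass
from scipy.optimize import minimize_scalar

@dataclass
class TriangularFuzzy:
    """Triangular fuzzy number (l, m, u) with l <= m <= u."""
    l: float
    m: float
    u: float

def evaluate(y: TriangularFuzzy) -> float:
    """Crisp rating of a fuzzy number; the centroid of a triangular
    number is one simple choice of evaluation function."""
    return (y.l + y.m + y.u) / 3.0

def f_tilde(a: TriangularFuzzy, x: float) -> TriangularFuzzy:
    """Fuzzy extension of f(a, x) = a*x - x**2; f is increasing in a
    for x >= 0, so the extension maps endpoints to endpoints."""
    return TriangularFuzzy(a.l * x - x**2, a.m * x - x**2, a.u * x - x**2)

a = TriangularFuzzy(1.0, 2.0, 3.0)
# Maximize the crisp composition x -> evaluate(f_tilde(a, x)) on K = [0, 5].
res = minimize_scalar(lambda x: -evaluate(f_tilde(a, x)),
                      bounds=(0, 5), method="bounded")
print(res.x)  # maximizer; analytically x* = mean(a)/2 = 1.0
```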

20.
We introduce m-near-resolvable block designs. We establish a correspondence between such block designs and a subclass of (optimal equidistant) q-ary constant-weight codes meeting the Johnson bound. We present constructions of m-near-resolvable block designs, in particular based on Steiner systems and super-simple t-designs.
