Similar Articles (20 results)
1.
Diamond drilling has been widely used in different civil engineering projects. The prediction of penetration rate in drilling is especially useful for feasibility studies. In this study, the predictability of the penetration rate of diamond drilling was investigated from the operational variables and rock properties such as the uniaxial compressive strength, the tensile strength and the relative abrasiveness. Both multiple regression and artificial neural network (ANN) analyses were used in the study. Very good models were derived from the ANN analysis for the prediction of penetration rate. The comparison of the ANN models with the regression models indicated that the ANN models were much more reliable than the regression models. It is concluded that the penetration rate of diamond drilling can be reliably estimated from the uniaxial compressive strength, the tensile strength and the relative abrasiveness using the ANN models.
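A minimal sketch of the kind of comparison the abstract describes: fitting a small neural network and a multiple regression on the same three predictors (UCS, tensile strength, relative abrasiveness). The data, the response formula and the network size below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
ucs = rng.uniform(30, 200, n)            # uniaxial compressive strength, MPa
tensile = rng.uniform(2, 20, n)          # tensile strength, MPa
abrasiveness = rng.uniform(0.1, 1.0, n)  # relative abrasiveness (dimensionless)
X = np.column_stack([ucs, tensile, abrasiveness])
# Hypothetical nonlinear relation standing in for the measured penetration rate
y = 50.0 / (1 + 0.02 * ucs) - 0.5 * tensile - 5 * abrasiveness + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)
print("regression R2:", r2_score(y_te, reg.predict(X_te)))
print("ANN R2:       ", r2_score(y_te, ann.predict(X_te)))
```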

2.
Uniaxial compressive strength (UCS) of rock is crucial for any type of project constructed in or on rock mass. The test conducted to measure the UCS of rock is expensive, time-consuming and subject to sample restrictions. For this reason, the UCS of rock may be estimated using simple rock tests such as the point load index (Is(50)), Schmidt hammer rebound (Rn) and p-wave velocity (Vp) tests. To estimate the UCS of granitic rock as a function of relevant rock properties such as Rn, Vp and Is(50), rock cores were collected from the face of the Pahang–Selangor fresh water tunnel in Malaysia. Afterwards, 124 samples were prepared and tested in accordance with the relevant standards, and a dataset was obtained. The established dataset was then used to estimate the UCS of rock via three nonlinear prediction tools, namely non-linear multiple regression (NLMR), artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). After running these models, they were examined using several performance indices, including the coefficient of determination (R²), variance account for (VAF) and root mean squared error (RMSE), together with a simple ranking procedure, and the best prediction model was selected. An R² of 0.951 on the testing dataset suggests the superiority of the ANFIS model, while the corresponding values are 0.651 and 0.886 for the NLMR and ANN techniques, respectively. The results point out that the ANFIS model can predict the UCS of rocks with higher capacity than the others. However, although the developed model may be useful at a preliminary stage of design, it should be used with caution and only for the specified rock types.
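The abstract ranks NLMR, ANN and ANFIS by coefficient of determination (R²), variance account for (VAF) and root mean squared error (RMSE). A short sketch of those three indices follows, using a common VAF definition and placeholder prediction values rather than the paper's data.

```python
import numpy as np

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def vaf(y, yhat):
    # VAF (%) = (1 - var(y - yhat) / var(y)) * 100, a common definition
    return (1 - np.var(y - yhat) / np.var(y)) * 100

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

y_true = np.array([80.0, 95.0, 120.0, 60.0, 150.0])   # measured UCS, MPa (placeholder)
y_pred = np.array([78.0, 99.0, 115.0, 66.0, 140.0])   # one model's estimates (placeholder)
print(f"R2={r2(y_true, y_pred):.3f}  VAF={vaf(y_true, y_pred):.1f}%  "
      f"RMSE={rmse(y_true, y_pred):.1f} MPa")
```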

3.
Design of rectangular concrete-filled steel tubular (CFT) columns has been a major concern owing to their complex confinement mechanism. Most existing methods are based on simplified mechanical models with limited experimental data, which are not reliable under many conditions, e.g., for columns using high-strength materials. Artificial neural network (ANN) models have proven effective for solving complex problems in many areas of civil engineering in recent years. In this paper, ANN models were employed to predict the axial bearing capacity of rectangular CFT columns based on experimental data. 305 experimental data points were collected from the literature; 275 samples were used to train the ANN models and 30 samples for testing. Based on a comparison among different models, artificial neural network model 1 (ANN1) and model 2 (ANN2), each with a 20-neuron hidden layer, were chosen as the prediction models. ANN1 has five inputs: the length (D) and width (B) of the cross section, the thickness of the steel tube (t), the yield strength of steel (fy) and the cylinder strength of concrete (fc). ANN2 has ten inputs: D, B, t, fy, fc, the length-to-width ratio (D/B), the length-to-thickness ratio (D/t), the width-to-thickness ratio (B/t), the restraint coefficient (ξ) and the steel ratio (α). The axial bearing capacity is the output for both models. The outputs from ANN1 and ANN2 were verified and compared with those from EC4, ACI, GJB4142 and AISC360-10. The results show that the implemented models have good prediction and generalization capacity. A parametric study conducted using ANN1 and ANN2 indicates that the influence of the basic column parameters on the axial bearing capacity of rectangular CFT columns differs from that assumed in the design codes. The results also provide a convincing design reference for rectangular CFT columns.
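A small sketch of how ANN2's ten inputs could be assembled from ANN1's five basic ones. The formulas used here for the steel ratio α and the restraint coefficient ξ are common CFT definitions assumed for illustration; the paper may define them differently.

```python
def ann2_features(D, B, t, fy, fc):
    """D, B: cross-section length/width (mm); t: steel tube thickness (mm);
    fy: steel yield strength (MPa); fc: concrete cylinder strength (MPa)."""
    As = D * B - (D - 2 * t) * (B - 2 * t)   # steel area of the tube wall
    Ac = (D - 2 * t) * (B - 2 * t)           # concrete core area
    alpha = As / Ac                          # steel ratio (assumed definition)
    xi = alpha * fy / fc                     # restraint coefficient (assumed definition)
    return [D, B, t, fy, fc, D / B, D / t, B / t, xi, alpha]

# Example column: 200 x 150 mm section, 5 mm tube, 345 MPa steel, 40 MPa concrete
print(ann2_features(200.0, 150.0, 5.0, 345.0, 40.0))
```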

4.
Unsupervised techniques such as clustering may be used for software cost estimation in situations where parametric models are difficult to develop. This paper presents a software cost estimation model based on a modified K-Modes clustering algorithm. The aims of this paper are, first, to present the modified K-Modes clustering algorithm, which enhances the simple K-Modes algorithm with a proper dissimilarity measure for mixed data types, and second, to apply the proposed K-Modes algorithm to software cost estimation. We compared our modified K-Modes algorithm with existing algorithms on different software cost estimation datasets, and the results show the effectiveness of the proposed algorithm.
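A sketch of a mixed-type dissimilarity of the sort the modified K-Modes algorithm requires: simple matching on categorical attributes plus a range-normalised absolute difference on numeric ones. The exact measure and the example records are illustrative assumptions, not the paper's.

```python
def mixed_dissimilarity(x, y, numeric_idx, ranges):
    d = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if i in numeric_idx:
            d += abs(a - b) / ranges[i]        # numeric attribute, scaled to [0, 1]
        else:
            d += 0.0 if a == b else 1.0        # categorical attribute, simple matching
    return d

# Two hypothetical project records: (language, team_size_category, KLOC, effort_multiplier)
p1 = ("java",   "small", 12.0, 1.10)
p2 = ("python", "small", 30.0, 0.95)
numeric_idx = {2, 3}
ranges = {2: 100.0, 3: 1.0}                    # assumed attribute ranges
print(mixed_dissimilarity(p1, p2, numeric_idx, ranges))
```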

5.
Complex software development projects rely on the contribution of teams of developers, who are required to collaborate and coordinate their efforts. The productivity of such development teams, i.e., how their size is related to the produced output, is an important consideration for project and schedule management as well as for cost estimation. The majority of studies in empirical software engineering suggest that, due to coordination overhead, teams of collaborating developers become less productive as they grow in size. This phenomenon is commonly paraphrased as Brooks’ law of software project management, which states that “adding manpower to a software project makes it later”. Outside software engineering, the non-additive scaling of productivity in teams is often referred to as the Ringelmann effect, which is studied extensively in social psychology and organizational theory. Conversely, a recent study suggested that in Open Source Software (OSS) projects, the productivity of developers increases as the team grows in size. This surprising finding was attributed to collective synergetic effects and linked to the Aristotelian quote that “the whole is more than the sum of its parts”. Using a data set of 58 OSS projects with more than 580,000 commits contributed by more than 30,000 developers, in this article we provide a large-scale analysis of the relation between size and productivity of software development teams. Our findings confirm the negative relation between team size and productivity previously suggested by empirical software engineering research, thus providing quantitative evidence for the presence of a strong Ringelmann effect. Using fine-grained data on the association between developers and source code files, we investigate possible explanations for the observed relations between team size and productivity. In particular, we take a network perspective on developer-code associations in software development teams and show that the magnitude of the decrease in productivity is likely to be related to the growth dynamics of co-editing networks, which can be interpreted as a first-order approximation of coordination requirements.
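A sketch of the scaling analysis this kind of study performs: fit output ≈ c·(team size)^β on log-log axes, where β < 1 indicates sublinear (Ringelmann-style) scaling and β > 1 would indicate synergy. The data below are synthetic, not the 58-project corpus.

```python
import numpy as np

rng = np.random.default_rng(1)
team_size = rng.integers(2, 200, size=300)
# Hypothetical sublinear production function with noise (true beta = 0.8)
commits = 50 * team_size ** 0.8 * rng.lognormal(0, 0.3, size=300)

beta, log_c = np.polyfit(np.log(team_size), np.log(commits), 1)
print(f"estimated scaling exponent beta = {beta:.2f}")  # < 1 => decreasing per-capita productivity
```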

6.
In this study, the applicability of Artificial Neural Networks (ANNs) has been investigated for predicting the performance and emission characteristics of a diesel engine fuelled with waste cooking oil (WCO). ANN modeling was done using multilayer perceptron (MLP) and radial basis function (RBF) networks. In the RBF networks, centers were initialized by two different methods, namely random selection and a clustering algorithm. In the clustering approach, center initialization was done using the FCM (fuzzy c-means) and CDWFCM (cluster-dependent weighted fuzzy c-means) algorithms. The networks were trained using the experimental data, wherein load percentage, compression ratio, blend percentage, injection timing and injection pressure were taken as the input parameters, and brake thermal efficiency, brake specific energy consumption, exhaust gas temperature and engine emissions were used as the output parameters. The investigation showed that the ANN-predicted results matched the experimental results well over a wide range of operating conditions for both models. A comparison was made between the ANN models and regression models; the ANN models performed better than the regression models. Similarly, a comparison of MLP and RBF indicated that RBF with CDWFCM performed better than the MLP networks, with lower Mean Relative Error (MRE) and higher prediction accuracy.
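A sketch of an RBF model whose centers come from a clustering step. Plain k-means stands in here for the paper's FCM/CDWFCM initialisation, and the engine data are random placeholders; the MRE computation at the end matches the metric named in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# load %, compression ratio, blend %, injection timing, injection pressure (scaled to [0, 1])
X = rng.uniform(0, 1, size=(300, 5))
# Stand-in for a measured output such as brake thermal efficiency (%)
y = 30 + 5 * np.sin(3 * X[:, 0]) + 10 * X[:, 1] * X[:, 2] + rng.normal(0, 0.2, 300)

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.5
# Gaussian RBF features relative to the clustered centers
Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
model = Ridge(alpha=1e-3).fit(Phi, y)

pred = model.predict(Phi)
mre = np.mean(np.abs((y - pred) / y)) * 100   # mean relative error, %
print(f"training MRE = {mre:.2f}%")
```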

7.
Software development guidelines are a set of rules which can help improve the quality of software. These rules are defined on the basis of experience gained by the software development community over time. This paper discusses a set of design guidelines for model-based development of complex real-time embedded software systems. To be precise, we propose nine design conventions, three design patterns and thirteen antipatterns for developing UML-RT models. These guidelines have been identified based on our analysis of around 100 UML-RT models from industry and academia. Most of the guidelines are explained with the help of examples, and standard templates from the current state of the art are used for documenting the design rules.

8.
To facilitate developers in effective allocation of their testing and debugging efforts, many software defect prediction techniques have been proposed in the literature. These techniques can be used to predict classes that are more likely to be buggy based on the past history of classes, methods, or certain other code elements. These techniques are effective provided that a sufficient amount of data is available to train a prediction model. However, sufficient training data are rarely available for new software projects. To resolve this problem, cross-project defect prediction, which transfers a prediction model trained using data from one project to another, was proposed and is regarded as a new challenge in the area of defect prediction. Thus far, only a few cross-project defect prediction techniques have been proposed. To advance the state of the art, in this study, we investigated seven composite algorithms that integrate multiple machine learning classifiers to improve cross-project defect prediction. To evaluate the performance of the composite algorithms, we performed experiments on 10 open-source software systems from the PROMISE repository, which contain a total of 5,305 instances labeled as defective or clean. We compared the composite algorithms with the combined defect predictor in which logistic regression is used as the meta classification algorithm (CODEP_Logistic), the most recent cross-project defect prediction algorithm, in terms of two standard evaluation metrics: cost effectiveness and F-measure. Our experimental results show that several algorithms outperform CODEP_Logistic: Maximum voting shows the best performance in terms of F-measure, and its average F-measure is superior to that of CODEP_Logistic by 36.88%. Bootstrap aggregation (BaggingJ48) shows the best performance in terms of cost effectiveness, and its average cost effectiveness is superior to that of CODEP_Logistic by 15.34%.
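A sketch of two of the composite schemes mentioned above: maximum voting over several classifiers and bootstrap aggregation over decision trees (sklearn's CART trees stand in for J48), evaluated by F-measure on placeholder data rather than the PROMISE instances.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                       # 20 static code metrics (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 1).astype(int)  # defective / clean
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

voting = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                           ("nb", GaussianNB()),
                           ("dt", DecisionTreeClassifier(random_state=0))],
                          voting="hard").fit(X_tr, y_tr)
bagging = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                            n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("max voting F-measure:", f1_score(y_te, voting.predict(X_te)))
print("bagging    F-measure:", f1_score(y_te, bagging.predict(X_te)))
```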

9.
To date most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split represents the use of a project’s starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that have been completed prior to p’s starting date. A study in 2009 (“S3”) investigated the use of chronological split taking into account a project’s age. The research question investigated was whether the use of a training set containing only the most recent past projects (a “moving window” of recent projects) would lead to more accurate estimates when compared to using the entire history of past projects completed prior to the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described herein replicates S3 using three different and independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3, as they do not show any gain in estimation accuracy when using windows for effort estimation. This is a surprising result: the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, which are discussed in this paper, might have contributed to such contradicting results. Some of our future work entails replicating this work using other datasets, to understand better when using windows is a suitable choice for software companies.
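A sketch of the two training-set policies compared in the study: a chronological split (all projects completed before the new project starts) versus a moving window keeping only the most recent completions. The project records, dates and window size are illustrative.

```python
from datetime import date

projects = [  # (name, completion_date, effort_person_months) -- placeholder records
    ("p1", date(2010, 3, 1), 12.0),
    ("p2", date(2011, 7, 15), 30.0),
    ("p3", date(2012, 1, 10), 8.0),
    ("p4", date(2013, 5, 20), 22.0),
]

def training_set(history, new_start, window=None):
    finished = [p for p in history if p[1] < new_start]        # chronological split
    finished.sort(key=lambda p: p[1])
    return finished if window is None else finished[-window:]  # moving window keeps recent ones

new_project_start = date(2013, 1, 1)
print(training_set(projects, new_project_start))              # entire prior history
print(training_set(projects, new_project_start, window=2))    # moving window of size 2
```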

10.
This paper proposes a cross-recovery scheme to protect a group of 3D models. The lost or damaged models can be reconstructed using the mutual support of the surviving authenticated models. In the encoding phase, we convert a group of n given models (called host models) into n stego* models. The n stego* models still preserve the appearance of the n host models. In the decoding phase, we divide the received models into two groups: authenticated vs. non-authenticated. Then, we rebuild the recovered models of the non-authenticated group by the mutual support of any t authenticated models (t < n is a given parameter). The experimental results show that the visual quality of our stego* models is very similar to that of the host models, and the size of the stego* models is also very similar to that of the host models. Moreover, after a hacker’s attack or a disk crash, if the number of attacked or crashed stego* models is not larger than n − t, then the damaged or lost models can be recovered, and the recovered models still have acceptable quality. We also provide an equation which can estimate the suitable size of the recovered model in advance.

11.
Cellular Learning Automata (CLAs) are hybrid models obtained from the combination of Cellular Automata (CAs) and Learning Automata (LAs). These models can be either open or closed. In closed CLAs, the states of the neighboring cells of each cell, called the local environment, affect the action selection process of the LA of that cell, whereas in open CLAs each cell, in addition to its local environment, has an exclusive environment, which is observed by that cell only, and the global environment, which can be observed by all cells in the CLA. In dynamic models of CLAs, one of their aspects, such as the structure, local rule or neighborhood radius, may change during the evolution of the CLA. CLAs can also be classified as synchronous or asynchronous. In a synchronous CLA, all LAs in different cells are activated synchronously, whereas in an asynchronous CLA, the LAs in different cells are activated asynchronously. In this paper, a new closed asynchronous dynamic model of CLA, in which the structure and the number of LAs in each cell may vary with time, is introduced. To show the potential of the proposed model, a landmark clustering algorithm for solving the topology mismatch problem in unstructured peer-to-peer networks is proposed. To evaluate the proposed algorithm, computer simulations have been conducted, and the results are compared with those obtained for two existing algorithms for solving the topology mismatch problem. It is shown that the proposed algorithm is superior to the existing algorithms with respect to communication delay and average round-trip time between peers within clusters.

12.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting components of software that are defect-prone. One aspect of this research focuses on predicting software changes that are fix-inducing. Although the prior research on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: It gives the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier to address by a developer than a thread synchronization issue. Therefore, in this paper, we study high impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of impact of the fix-inducing changes, which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files and the number of subsystems modified by developers during an associated fix of the fix-inducing change. We perform our study using six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56 % to 77 % of HIFCs with an average false alarm (misclassification) rate of 16 %. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications on the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4 % over the state-of-the-art models. We believe our results would help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
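A sketch of an impact score built from the three quantities the abstract names (churn, files and subsystems modified in the fixing change). The normalisation caps and the equal-weight combination are assumptions made here for illustration; the paper's model and HIFC threshold may differ.

```python
def fix_impact(churn, n_files, n_subsystems,
               max_churn=1000, max_files=50, max_subsystems=10):
    """Return an impact score in [0, 1] for one fix-inducing change,
    based on the work done in its later fixing change."""
    parts = (min(churn, max_churn) / max_churn,
             min(n_files, max_files) / max_files,
             min(n_subsystems, max_subsystems) / max_subsystems)
    return sum(parts) / len(parts)

print(fix_impact(churn=420, n_files=7, n_subsystems=3))   # moderately impactful fix
print(fix_impact(churn=12, n_files=1, n_subsystems=1))    # likely low-impact (e.g., a typo fix)
```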

13.
Consideration was given to the classical NP-hard problem 1|r_j|L_max of scheduling theory. An algorithm to determine the optimal schedule for processing n jobs where the job parameters satisfy a system of linear constraints was presented. The polynomially solvable area of the problem 1|r_j|L_max was expanded. An algorithm was described to construct a Pareto-optimal set of schedules by the criteria L_max and C_max with complexity O(n³ log n) operations.
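A sketch of the two criteria appearing in the Pareto-optimal set: for a fixed processing order on one machine with release dates r_j, processing times p_j and due dates d_j, compute the maximum lateness L_max and the makespan C_max. The instance is arbitrary, not one from the paper.

```python
def lmax_cmax(order, r, p, d):
    t, lmax = 0, float("-inf")
    for j in order:
        t = max(t, r[j]) + p[j]        # a job cannot start before its release date
        lmax = max(lmax, t - d[j])     # lateness of job j
    return lmax, t                     # (L_max, C_max)

r = [0, 2, 4]       # release dates
p = [3, 2, 1]       # processing times
d = [4, 7, 8]       # due dates
print(lmax_cmax([0, 1, 2], r, p, d))
print(lmax_cmax([1, 0, 2], r, p, d))
```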

14.
Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models or asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if, for example, the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence such as feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available data sets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures. To further demonstrate the effectiveness of this technique, we use it to audit a black-box recidivism prediction algorithm.

15.
Suppose we have a parallel or distributed system whose nodes have limited capacities, such as processing speed, bandwidth, memory, or disk space. How does the performance of the system depend on the amount of heterogeneity of its capacity distribution? We propose a general framework to quantify the worst-case effect of increasing heterogeneity in models of parallel systems. Given a cost function g(C,W) representing the system’s performance as a function of its nodes’ capacities C and workload W (such as the makespan of an optimum schedule of jobs W on machines C), we say that g has price of heterogeneity α when for any workload, cost cannot increase by more than a factor α if node capacities become arbitrarily more heterogeneous. The price of heterogeneity also upper bounds the “value of parallelism”: the maximum benefit obtained by increasing parallelism at the expense of decreasing processor speed. We give constant or logarithmic bounds on the price of heterogeneity of several well-known job scheduling and graph degree/diameter problems, indicating that in many cases, increasing heterogeneity can never be much of a disadvantage.
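A sketch of one concrete cost function g(C, W): the makespan of scheduling jobs W on machines with speeds C, here approximated by a greedy longest-job-first list schedule rather than the optimum. Comparing a homogeneous and a heterogeneous capacity vector of equal total capacity illustrates the quantity that the price of heterogeneity bounds.

```python
def greedy_makespan(speeds, jobs):
    finish = [0.0] * len(speeds)
    for w in sorted(jobs, reverse=True):           # longest job first
        i = min(range(len(speeds)), key=lambda k: finish[k] + w / speeds[k])
        finish[i] += w / speeds[i]
    return max(finish)

jobs = [5, 4, 4, 3, 2, 2, 1]
homogeneous   = [2.0, 2.0, 2.0, 2.0]     # total capacity 8, evenly spread
heterogeneous = [5.0, 1.0, 1.0, 1.0]     # same total capacity, more skewed
print(greedy_makespan(homogeneous, jobs), greedy_makespan(heterogeneous, jobs))
```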

16.
SGGS (Semantically-Guided Goal-Sensitive reasoning) is a clausal theorem-proving method, which generalizes to first-order logic the Davis-Putnam-Loveland-Logemann procedure with conflict-driven clause learning (DPLL-CDCL). SGGS starts from an initial interpretation, and works towards modifying it into a model of a given set of clauses, reporting unsatisfiability if there is no model. The state of the search for a model is described by a structure, called SGGS clause sequence. We present SGGS clause sequences as a formalism to represent models; and we prove their properties related to the mechanisms of SGGS for clausal propagation, conflict solving, and conflict-driven model repair at the first-order level.

17.
Single-file focusing and a minimum interdistance of micron-size objects in a sample are prerequisites for accurate flow cytometry measurements. Here, we report analytical models for predicting the focused width of a sample stream b as a function of channel aspect ratio α, sheath-to-sample flow rate ratio f and viscosity ratio λ in both 2D and 3D focusing. We present another analytical model to predict the spacing between an adjacent pair of objects in a focused sample stream as a function of sample concentration C, the mobility of the objects in the prefocused and postfocused regions, and flow rate ratio f in both 2D and 3D flow focusing. Numerical simulations are performed using the Ansys Fluent VOF model to predict the width of the sample stream in 2D and 3D hydrodynamic focusing for different sample-to-sheath viscosity ratios, aspect ratios and flow rate ratios. Experiments are performed on both planar and three-dimensional devices fabricated in PDMS to demonstrate focusing of the sample stream and spacing of polystyrene beads in the unfocused and focused stream at different sample concentrations C. The predictions of the analytical model and simulations are compared with experimental data, and a good match is found (within 12 %). Further, the mobility of objects is experimentally studied in 2D and 3D focusing, and the spread of the mobility data is used as a tool for the demonstration of particle focusing in flow cytometer applications.

18.
We consider the problem of estimating the noise level σ² in a Gaussian linear model Y = Xβ + σξ, where ξ ∈ ℝ^n is a standard discrete white Gaussian noise and β ∈ ℝ^p is an unknown nuisance vector. It is assumed that X is a known ill-conditioned n × p matrix with n ≥ p and with large dimension p. In this situation the vector β is estimated with the help of spectral regularization of the maximum likelihood estimate, and the noise level estimate is computed with the help of adaptive (i.e., data-driven) normalization of the quadratic prediction error. For this estimate, we compute its concentration rate around the pseudo-estimate ‖Y − Xβ‖²/n.
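A sketch of the pseudo-estimate the concentration result refers to: simulate Y = Xβ + σξ with an ill-conditioned design and compare ‖Y − Xβ‖²/n (which uses the unknown β, hence "pseudo") with the true σ². The dimensions and the design matrix below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 500, 50, 2.0
X = rng.normal(size=(n, p)) @ np.diag(np.linspace(1.0, 1e-3, p))  # ill-conditioned design
beta = rng.normal(size=p)
Y = X @ beta + sigma * rng.normal(size=n)

pseudo_estimate = np.sum((Y - X @ beta) ** 2) / n   # needs the unknown beta, hence "pseudo"
print(pseudo_estimate, sigma ** 2)                  # both close to 4.0
```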

19.
Recently, research on smart phones has received attention because of its wide potential applications. One interesting and useful topic is mining and predicting users’ mobile application (App) usage behaviors. With more and more Apps installed on users’ smart phones, users may spend much time finding the Apps they want to use by swiping the screen. App prediction systems help reduce search time and launching time, since Apps that are likely to be launched can be preloaded into memory before they are actually used. Although some previous studies have addressed the problem of App usage analysis, they recommend Apps for users based only on the frequencies of App usage. We consider that the relationship between App usage demands and users’ recent spatial and temporal behaviors may be strong. In this paper, we propose the Spatial and Temporal App Recommender (STAR), a novel framework to predict and recommend Apps for mobile users in a smart phone environment. The STAR framework consists of four major modules. We first find meaningful and semantic location movements from geographic GPS trajectory data using the Spatial Relation Mining Module and generate suitable temporal segments using the Temporal Relation Mining Module. Then, we design the Spatial and Temporal App Usage Pattern Mine (STAUP-Mine) algorithm to efficiently discover mobile users’ Spatial and Temporal App Usage Patterns (STAUPs). Furthermore, an App Usage Demand Prediction Module is presented to predict the following App usage demands according to the discovered STAUPs and spatial/temporal relations. To our knowledge, this is the first study to simultaneously consider spatial movements, temporal properties and App usage behavior for mining App usage patterns and demand prediction. Through rigorous experimental analysis of two real mobile App datasets, the STAR framework delivers excellent prediction performance.

20.
The open source software (OSS) movement has become widely recognized as an effective way to deliver software. Even big software companies, well-known for being restrictive when it comes to publishing their source code artifacts, have recently adopted open source initiatives and released for general use the source code of some of their most notable products. We conducted an exploratory study on the merits of the widespread belief that open-sourcing a proprietary software project will attract external developers, such as casual contributors, and therefore improve software quality (e.g., “given enough eyeballs, all bugs are shallow”). By examining the pre- and post-migration software history of eight active, popular, non-trivial proprietary projects that became open source, we characterize the phenomenon and identify some challenges. Contrary to what many believe, we found that only a few projects experienced a growth in newcomers, contributions, and popularity; furthermore, this growth does not last long. The results from the study can be useful for helping software companies to better understand the hidden challenges of open-sourcing their software projects to attract external developers.
