We investigate a variant of on-line edge-coloring in which there is a fixed number of colors available and the aim is to color as many edges as possible. We prove upper and lower bounds on the performance of different classes of algorithms for the problem. Moreover, we determine the performance of two specific algorithms, First-Fit and Next-Fit. Algorithms that never reject edges they are able to color are called fair algorithms. We consider the four combinations of fair/not fair and deterministic/randomized. We show that the competitive ratio of deterministic fair algorithms can vary only between approximately 0.4641 and 1/2, and that Next-Fit is worst possible among fair algorithms. Moreover, we show that no algorithm is better than 4/7-competitive. If the graphs are all k-colorable, any fair algorithm is at least 1/2-competitive. Again, this performance is matched by Next-Fit, while the competitive ratio of First-Fit is shown to be k/(2k-1), which is significantly better as long as k is not too large.
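As an illustration of the two algorithms analyzed in this abstract, the following minimal Python sketch implements fair on-line edge-coloring with a fixed palette of k colors: an edge is colored whenever some color is free at both endpoints, otherwise it is rejected. First-Fit always tries the lowest-indexed free color; Next-Fit keeps a cyclic pointer into the palette. Function names and details are our own illustration, not code from the paper.

```python
from collections import defaultdict

def first_fit(edges, k):
    """Fair First-Fit: use the lowest-indexed color free at both endpoints."""
    used = defaultdict(set)          # vertex -> colors already used at it
    colored = 0
    for u, v in edges:               # edges arrive on-line, one at a time
        for c in range(k):
            if c not in used[u] and c not in used[v]:
                used[u].add(c)
                used[v].add(c)
                colored += 1
                break                # edge rejected only if no color fits
    return colored

def next_fit(edges, k):
    """Fair Next-Fit: scan the palette cyclically from the last color used."""
    used = defaultdict(set)
    colored = 0
    start = 0                        # cyclic pointer into the palette
    for u, v in edges:
        for i in range(k):
            c = (start + i) % k
            if c not in used[u] and c not in used[v]:
                used[u].add(c)
                used[v].add(c)
                colored += 1
                start = (c + 1) % k  # continue after the color just used
                break
    return colored
```

Both routines are fair in the paper's sense: they scan the whole palette and reject an edge only when no color is free at both endpoints; they differ only in the order in which colors are tried.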
In this paper we consider the optimization of general 3D truss structures. The design variables are the cross-sections of the truss bars together with the joint coordinates, and are considered to be continuous variables. Using these design variables we simultaneously carry out size optimization (areas) and shape optimization (joint positions). Topology optimization (removal and introduction of bars) is considered only in the sense that bars of minimum cross-sectional area have a negligible influence on the performance of the structure. The structures are subjected to multiple load cases, and the objective of the optimization is minimum mass with constraints on (possibly multiple) eigenfrequencies, displacements, and stresses. Tensile and compressive stresses are treated differently: for compressive stresses, buckling is controlled at the element level. The stress constraints are imposed in accordance with industrial standards, to make the optimized designs valuable from a practical point of view. The optimization problem is solved using sequential linear programming (SLP).
When modeling a decision problem using the influence diagram framework, the quantitative part rests on two principal components: probabilities for representing the decision maker's uncertainty about the domain and utilities for representing preferences. Over the last decade, several methods have been developed for learning the probabilities from a database. However, methods for learning the utilities have only received limited attention in the computer science community.
A promising approach for learning a decision maker's utility function is to start from the decision maker's observed behavioral patterns and then find a utility function which (together with a domain model) can explain this behavior. That is, it is assumed that the decision maker's preferences are reflected in the behavior. Standard learning algorithms also assume that the decision maker is behaviorally consistent, i.e., given a model of the decision problem, there exists a utility function which can account for all the observed behavior. Unfortunately, this assumption is rarely valid in real-world decision problems, and in these situations existing learning methods may only identify a trivial utility function. In this paper we relax this consistency assumption and propose two algorithms for learning a decision maker's utility function from possibly inconsistent behavior; inconsistent behavior is interpreted as random deviations from an underlying (true) utility function. The main difference between the two algorithms is that the first facilitates a form of batch learning, whereas the second focuses on adaptation and is particularly well suited for scenarios where the decision maker's preferences change over time. Empirical results demonstrate the tractability of the algorithms and show that they converge toward the true utility function even for very small sets of observations.
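The interpretation of inconsistent behavior as random deviations from an underlying utility function can be made concrete with a random-utility (logit/softmax) choice model. The sketch below is not either of the paper's algorithms; it is a minimal, self-contained batch-style illustration in which option utilities are fitted by gradient ascent on the log-likelihood of observed, possibly inconsistent, choices over a fixed option set. All names and parameters are our own.

```python
import math

def softmax(u, beta):
    """Choice probabilities under a logit model with rationality parameter beta."""
    m = max(u)
    exps = [math.exp(beta * (x - m)) for x in u]
    s = sum(exps)
    return [e / s for e in exps]

def learn_utilities(choices, n_options, beta=1.0, lr=0.05, steps=500):
    """Fit option utilities so a softmax choice model best explains the choices.

    choices: list of chosen option indices; repeated, contradictory choices
    are handled naturally, since the model only matches choice frequencies.
    """
    u = [0.0] * n_options
    for _ in range(steps):
        p = softmax(u, beta)
        # gradient of the average log-likelihood over all observed choices
        grad = [0.0] * n_options
        for c in choices:
            for i in range(n_options):
                grad[i] += beta * ((1.0 if i == c else 0.0) - p[i])
        u = [ui + lr * gi / len(choices) for ui, gi in zip(u, grad)]
        mean = sum(u) / n_options    # softmax is shift-invariant, so anchor u
        u = [ui - mean for ui in u]
    return u
```

Even when the observed choices are mutually inconsistent, the fitted utilities rank the options by how often they were chosen, which is the spirit of treating inconsistency as noise around a true utility function.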
Fecal samples from 335 dairy farm residents and 1458 cattle on 80 farms were tested for Vero cytotoxin (VT)-producing Escherichia coli (VTEC). Residents were also tested for antibodies to VT1 and O157 lipopolysaccharide (LPS). Residents and cattle on farms with VTEC-positive persons or E. coli O157:H7-positive cattle were retested. Twenty-one persons (6.3%) on 16 farms (20.8%) and 46% of cattle on 100% of the farms had VTEC in fecal samples. Human VTEC isolates included E. coli O157:H7 and 8 other serotypes, 4 of which were present in cattle on the same farms. More persons had antibodies to VT1 (41%) than to O157 LPS (12.5%). Seropositivity to O157 LPS was associated with isolation of E. coli O157:H7 on the farm (P = .022). Human VTEC infection was negatively associated with age (P < .05) and was not associated with clinical illness. Many dairy farm residents experience subclinical immunizing VTEC infections at a young age, which frequently involve non-O157 VTEC found in cattle.
The authors use their different work and training backgrounds as perspectives for discussing the changing world of graphical design. In this digital world, expert knowledge of the technology is essential, but the authors argue that a consequence of prioritising technical expertise in the new interdisciplinary IT-design studies may be a loss of quality in design. To improve depth in design, the authors introduce a psychological framework for design work and, on this basis, suggest a systematic method. The method is a four-step conceptual model: reason, function, emotion and senses, and technology. The model interacts with all phases of the design and functions as a generating-creating and execution-evaluation system. The authors emphasise that evaluation is embedded throughout the design process. They suggest that a systematic method becomes the guiding tool in interdisciplinary design education, an essential part of the students' design qualifications as well as an essential tool in design work.
This paper investigates the well-known model for unsteady friction developed by Zielke in 1968. The model is based on weights of past local bulk accelerations and is analytically correct for laminar flow, but computationally demanding. Different models have been proposed using dynamic properties, typically based on instantaneous accelerations (IAB), that are faster in computational schemes. Unfortunately, they are not as accurate as Zielke's model and fail to model certain types of transients. This paper points out that the water hammer transient is dominated by a periodicity varying along the pipe. Because of this, the unsteady friction calculated by the Zielke model is distributed nonuniformly along the pipe, and changes in the pipe length change the local unsteady friction. This phenomenon may explain why IAB models using calibrated coefficients to match experimental results show a large span in the reported coefficient values. This paper will hopefully contribute to further work toward highly accurate and rapid models. The subject deserves to be brought up for discussion as part of a total understanding of the problem.
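The structure of a Zielke-type model, and why it is computationally demanding, can be seen in a short sketch: the unsteady friction term is a convolution of the full history of bulk accelerations with a weighting function. Zielke's actual laminar-flow weighting function is a series in dimensionless time; the single exponential used as the default `W` below is only a placeholder so that the shape of the computation is visible, and all names and constants are illustrative.

```python
import math

def unsteady_friction(accels, dt, nu, D, W=lambda tau: math.exp(-tau)):
    """Convolution-based (Zielke-type) unsteady friction term, sketched.

    accels : mean-flow accelerations dV/dt at t = dt, 2*dt, ..., n*dt
    dt     : time step [s];  nu : kinematic viscosity [m^2/s];  D : pipe diameter [m]
    W      : weighting function of dimensionless time lag (placeholder here)
    Returns the unsteady term at the latest time step, up to Zielke's
    laminar prefactor 16*nu/D**2 (per unit gravitational acceleration).
    """
    n = len(accels)
    scale = 16.0 * nu / D**2
    total = 0.0
    for j, a in enumerate(accels):
        # dimensionless time lag of the j-th sample, tau = 4*nu*(t - s)/D^2
        tau = 4.0 * nu * (n - 1 - j) * dt / D**2
        total += W(tau) * a * dt     # weighted past acceleration
    return scale * total
```

Note that each new time step reconvolves the whole history, an O(n) cost per step along every pipe section; this is exactly the burden that the faster instantaneous-acceleration (IAB) formulations avoid, at the price of accuracy.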
The goal in the field of modeling of hydraulic transients is a comprehensive model for pipe networks that is computationally fast and accurate. The fastest models are the one-dimensional (1D) models that use instantaneous acceleration-based (IAB) properties, but unfortunately these models are not as accurate as the more demanding 1D convolution-based (CB) models or quasi two-dimensional models. Focusing on a single pipe, this paper investigates the fundamental behavior of the much more accurate 1D CB model to find two coefficients for use with the two-coefficient formulation of the much-used modified IAB (MIAB) model for complete closing of a downstream valve. Two coefficients are found based on the weighting function used in the CB model, and these coefficients vary along the pipe length. Simulations are compared with two experimental results from tests performed at the University of Adelaide in Australia in 1995. The experimental results are for different initial Reynolds numbers of approximately 2,000 and 5,800. The results show very good agreement between simulations and experiments. The improvement of the MIAB model is not general; for the time being, only complete closure of a downstream valve in a single pipeline at low Reynolds numbers has been investigated.
In the present study, the use of gas chromatography mass spectrometry (GC–MS)-based metabonomics to characterize blood serum in an intervention study of patients suffering from the common gastrointestinal disorder irritable bowel syndrome (IBS) was investigated. The patients included in the study consumed an acidified milk product with (n = 30) or without (n = 31) probiotics (Lactobacillus paracasei F19, Lactobacillus acidophilus LA-5 and Bifidobacterium lactis BB-12) for an 8-week period, and blood serum samples were collected before and after the intervention. Acidified milk is commonly used as a delivery vehicle for probiotics in commercial consumer products. The serum samples were extracted and derivatized using N-Methyl-N-(trimethylsilyl) trifluoroacetamide (MSTFA), and GC–MS analysis was carried out. Multivariate data analysis including principal component analysis (PCA), orthogonal partial least squares-discriminant analysis (OPLS-DA), and S-plot was applied to the obtained GC–MS data, which revealed higher serum lactate, glutamine, proline, creatinine/creatine, and aspartic acid levels and lower serum glucose levels after the intervention period for both treatment groups. Consequently, the present study indicated an effect of acidified milk consumption on the serum metabolite profile, which was independent of a concomitant intake of probiotics. In addition, the present study demonstrates that GC–MS is a useful analytical technique for metabonomics studies of blood serum.