The algorithm selection problem is defined as identifying the best-performing machine learning (ML) algorithm for a given combination of dataset, task, and evaluation measure. The human expertise required to evaluate the increasing number of ML algorithms available has resulted in the need to automate the algorithm selection task. Various approaches have emerged to handle the automatic algorithm selection challenge, including meta-learning. Meta-learning is a popular approach that leverages accumulated experience for future learning and typically involves dataset characterization. Existing meta-learning methods often represent a dataset using predefined features and thus cannot be generalized across different ML tasks, or alternatively, learn a dataset's representation in a supervised manner and therefore are unable to deal with unsupervised tasks. In this study, we propose a novel learning-based task-agnostic method for producing dataset representations. Then, we introduce TRIO, a meta-learning approach that utilizes the proposed dataset representations to accurately recommend top-performing algorithms for previously unseen datasets. TRIO first learns graphical representations for the datasets, using four tools to learn the latent interactions among dataset instances, and then applies a graph convolutional neural network to extract embedding representations from the resulting graphs. We extensively evaluate the effectiveness of our approach on 337 datasets and 195 ML algorithms, demonstrating that TRIO significantly outperforms state-of-the-art methods for algorithm selection for both supervised (classification and regression) and unsupervised (clustering) tasks.
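As a rough illustration of the pipeline's flavor (not TRIO's actual architecture), one can turn a dataset into a graph over its instances and extract a fixed-size, task-agnostic embedding. The k-nearest-neighbor construction and spectral embedding below are simplified stand-ins for the paper's four graph-construction tools and its GCN encoder:

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric k-nearest-neighbor adjacency over dataset instances
    (a simplified stand-in for TRIO's graph-construction tools)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.zeros((len(X), len(X)))
    for i in range(len(X)):
        A[i, np.argsort(d[i])[1:k + 1]] = 1.0  # skip self at distance 0
    return np.maximum(A, A.T)                  # symmetrize

def spectral_embedding(A, dim=2):
    """Graph-Laplacian eigenvector embedding -- an unsupervised surrogate
    for the GCN-based encoder described in the paper."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                  # drop the constant eigenvector

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))      # a toy dataset: 20 instances, 5 features
Z = spectral_embedding(knn_graph(X), dim=2)
print(Z.shape)                    # fixed-size representation: (20, 2)
```

A meta-learner can then consume such embeddings, together with recorded algorithm performance on past datasets, to rank candidate algorithms for a new dataset.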
Decades of practice and research suggest that nurse practitioners (NPs) provide cost-effective and high-quality care. Managed care's emphasis on prevention and cost savings led some policy makers to view NPs as a way to meet the need for primary care providers. However, access to and utilization of NPs has increasingly been controlled by managed care organizations (MCOs) through their selection of providers for primary care panels. This study employed qualitative methodology to examine NPs' experiences with MCOs. Three focus groups, comprising 27 NPs in New York and Connecticut, revealed NPs' mixed reactions to managed care and a range of sentiments regarding NPs' efforts to be listed as primary care providers. The results reflected NPs' concerns about their perceived "invisibility," as well as their sense of "invincibility" in the ways in which NPs are responding to the barriers posed by MCOs. They identified barriers to, as well as ways to facilitate, being listed by MCOs, and described the importance of NPs working individually and collectively in negotiating with MCOs.
BACKGROUND: Like other areas of health care, critical care faces increasing pressure to improve the quality while reducing the cost of care. Strategies drawn from the literature and the authors' experiences are presented. STRATEGIES AND OPPORTUNITIES FOR IMPROVEMENTS: Ten process- or structure-related areas are targeted as strategically important focuses of improvement: (1) restructuring administrative lines to better suit key processes; (2) physician leadership in critical care units; (3) management training for critical care managers; (4) triage; (5) multidisciplinary critical care; (6) standardization of care; (7) developing alternatives to critical care units; (8) timeliness of care delivery; (9) appropriate use of critical care resources; and (10) tracking quality improvement. TIMELINESS OF CARE DELIVERY: Whatever the root cause(s) of unnecessary delays, the result is inefficient use of critical care resources-and ultimately either a need for more resources or longer wait times. Innovations designed to reduce wait times and waste, such as the establishment of a microchemistry stat laboratory, may prove valuable. APPROPRIATE USE OF CRITICAL CARE RESOURCES: Possible strategies for the appropriate use of critical care resources include better selection of well-informed patients who undergo procedures. Reduction in variation among physicians and organizations in providing therapies will also likely lead to a reduction in some high-risk procedures offering little or no benefit, and therefore a reduction in need for critical care services. Better preparation of patients and families should also make end-of-life decisions easier when questions of "futility" arise. Better information on outcomes and cost-effectiveness and consensus on withdrawal of critical care treatments represent two additional strategies.
Much remains to be understood about how low socioeconomic status (SES) increases cardiovascular disease and mortality risk. Data from the Kuopio Ischemic Heart Disease Risk Factor Study (1984-1993) were used to estimate the associations of income with all-cause mortality, cardiovascular mortality, and acute myocardial infarction (AMI) in a population-based sample of 2,272 Finnish men, with adjustment for 23 biologic, behavioral, psychologic, and social risk factors. Compared with the highest income quintile, those in the bottom quintile had age-adjusted relative hazards of 3.14 (95% confidence interval (CI) 1.77-5.56), 2.66 (95% CI 1.25-5.66), and 4.34 (95% CI 1.95-9.66) for all-cause mortality, cardiovascular mortality, and AMI, respectively. After adjustment for risk factors, the relative hazards for the same comparisons were 1.32 (95% CI 0.70-2.49), 0.70 (95% CI 0.29-1.69), and 2.83 (95% CI 1.14-7.00). In the lowest income quintile, adjustment for risk factors reduced the excess relative risk of all-cause mortality by 85%, that of cardiovascular mortality by 118%, and that of AMI by 45%. These data show how the association between SES and cardiovascular mortality and all-cause mortality is mediated by known risk factor pathways, but full "explanations" for these associations will need to encompass why these biologic, behavioral, psychologic, and social risk factors are differentially distributed by SES.
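The percent reductions quoted above follow from comparing the excess relative hazard (the hazard ratio minus 1) before and after adjustment; a quick arithmetic check reproduces all three figures:

```python
def excess_risk_reduction(unadjusted, adjusted):
    """Percent reduction in excess relative risk (hazard ratio minus 1)
    achieved by risk-factor adjustment."""
    excess_before = unadjusted - 1.0
    excess_after = adjusted - 1.0
    return 100.0 * (excess_before - excess_after) / excess_before

print(round(excess_risk_reduction(3.14, 1.32)))  # all-cause mortality: 85
print(round(excess_risk_reduction(2.66, 0.70)))  # cardiovascular mortality: 118
print(round(excess_risk_reduction(4.34, 2.83)))  # AMI: 45
```

Note that the cardiovascular figure exceeds 100% because the adjusted hazard ratio (0.70) falls below 1, i.e. adjustment more than eliminates the excess risk.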
Combinatorial interaction testing (CIT) is a cost-effective sampling technique for discovering interaction faults in highly configurable
systems. Constrained CIT extends the technique to situations where some features cannot coexist in a configuration, and is
therefore more applicable to real-world software. Recent work on greedy algorithms to build CIT samples now efficiently supports
these feature constraints. But when testing a single system configuration is expensive, greedy techniques perform worse than
meta-heuristic algorithms, because greedy algorithms generally need larger samples to exercise the same set of interactions.
On the other hand, current meta-heuristic algorithms have long run times when feature constraints are present. Neither class
of algorithm is suitable when both constraints and the cost of testing configurations are important factors. Therefore, we
reformulate one meta-heuristic search algorithm for constructing CIT samples, simulated annealing, to more efficiently incorporate
constraints. We identify a set of algorithmic changes and experiment with our modifications on 35 realistic constrained problems
and on a set of unconstrained problems from the literature to isolate the factors that improve performance. Our evaluation
determines that the optimizations reduce run time by a factor of 90 and accomplish the same coverage objectives with even
fewer system configurations. Furthermore, the new version compares favorably with greedy algorithms on real-world problems,
and, though our modifications were aimed at constrained problems, it shows similar advantages when feature constraints are
absent.
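To illustrate the core idea of a constraint-aware simulated annealing search for CIT samples, here is a minimal sketch. The row count, cooling schedule, and move operator are illustrative assumptions; the paper's reformulated algorithm incorporates constraints far more efficiently than this:

```python
import itertools, math, random

def sa_pairwise(n_params, n_vals, forbidden, n_rows, iters=30000, seed=1):
    """Toy simulated annealing for a constrained pairwise covering array:
    fix the row count, then minimize the number of still-uncovered valid
    pairs. Forbidden pairs use (i, value_i, j, value_j) with i < j."""
    rnd = random.Random(seed)
    def pairs_of(row):
        return {(i, row[i], j, row[j])
                for i, j in itertools.combinations(range(len(row)), 2)}
    required = {(i, a, j, b)
                for i, j in itertools.combinations(range(n_params), 2)
                for a in range(n_vals) for b in range(n_vals)
                if (i, a, j, b) not in forbidden}
    def row_ok(row):
        return not (pairs_of(row) & forbidden)
    def cost(rows):
        covered = set().union(*(pairs_of(r) for r in rows))
        return len(required - covered)
    rows = []
    while len(rows) < n_rows:                      # random valid initial sample
        cand = [rnd.randrange(n_vals) for _ in range(n_params)]
        if row_ok(cand):
            rows.append(cand)
    c, temp = cost(rows), 1.0
    for _ in range(iters):
        r, p = rnd.randrange(n_rows), rnd.randrange(n_params)
        old, rows[r][p] = rows[r][p], rnd.randrange(n_vals)
        if not row_ok(rows[r]):                    # reject infeasible moves
            rows[r][p] = old
            continue
        c2 = cost(rows)
        if c2 <= c or rnd.random() < math.exp((c - c2) / max(temp, 1e-9)):
            c = c2                                 # accept (possibly uphill)
        else:
            rows[r][p] = old                       # revert
        temp *= 0.9997                             # geometric cooling
        if c == 0:
            break
    return rows, c

# Four ternary parameters; value 0 of parameter 0 cannot coexist with
# value 0 of parameter 1.
forbidden = {(0, 0, 1, 0)}
rows, uncovered = sa_pairwise(4, 3, forbidden, n_rows=11)
print(len(rows), uncovered)
```

In this toy form, the cost of handling constraints is a feasibility check per move; the paper's contribution lies in making that incorporation efficient at realistic scale.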
Often, even if a crowd simulation looks good in general, some specific individual behaviors may not seem correct. Spotting such problems manually can become tedious, but ignoring them may harm the simulation's credibility. In this paper we present a data-driven approach for evaluating the behaviors of individuals within a simulated crowd. Based on video footage of a real crowd, a database of behavior examples is generated. Given a simulation of a crowd, an analogous analysis is performed on it, defining a set of queries, which are matched by a similarity function to the database examples. The results offer a possible objective answer to the question of how similar the simulated individual behaviors are to real observed behaviors. Moreover, by changing the video input one can change the context of evaluation. We show several examples of evaluating simulated crowds produced using different techniques, comprising dense crowds, sparse crowds, and flocks.
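The query-matching step can be sketched as a nearest-neighbor lookup over behavior feature vectors. The features (e.g. speed, local density, heading change) and the exponential similarity function below are hypothetical stand-ins for those used in the paper:

```python
import numpy as np

def behavior_similarity(queries, database):
    """For each simulated-behavior query vector, find its nearest real
    example and return the mean similarity in (0, 1]; 1.0 means the
    simulated behaviors coincide with observed ones."""
    d = np.linalg.norm(queries[:, None, :] - database[None, :, :], axis=-1)
    nearest = d.min(axis=1)           # distance to closest real example
    return float(np.exp(-nearest).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 3))      # behavior examples from video footage
good_sim = real[:20] + rng.normal(scale=0.05, size=(20, 3))  # close to real
bad_sim = rng.normal(loc=5.0, size=(20, 3))                  # far from real
print(behavior_similarity(good_sim, real) > behavior_similarity(bad_sim, real))
```

Swapping in a different video (a different `real` database) changes the evaluation context without changing the machinery, mirroring the paper's observation.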
In this paper we present new edge detection algorithms which are motivated by recent developments on edge-adapted reconstruction techniques [F. Aràndiga, A. Cohen, R. Donat, N. Dyn, B. Matei, Approximation of piecewise smooth functions and images by edge-adapted (ENO-EA) nonlinear multiresolution techniques, Appl. Comput. Harmon. Anal. 24 (2) (2008) 225–250]. They are based on comparing local quantities rather than on filtering and thresholding. This comparison process is invariant under certain transformations that model light changes in the image, hence we obtain edge detection algorithms which are insensitive to changes in illumination. 相似文献
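As an illustration of the comparison-based principle, here is a minimal 1-D sketch (not the paper's actual ENO-EA-based detector): an edge indicator built from ratios of local differences is unchanged under an affine illumination change g = a·f + b with a > 0, whereas a fixed gradient threshold is not.

```python
import numpy as np

def edge_indicator(f, eps=1e-12):
    """Illustrative 1-D edge indicator: the ratio of the centered
    difference to the local oscillation (max - min over a 5-sample
    window) is invariant under affine light changes g = a*f + b."""
    n = len(f)
    ind = np.zeros(n)
    for i in range(2, n - 2):
        window = f[i - 2:i + 3]
        osc = window.max() - window.min()
        ind[i] = abs(f[i + 1] - f[i - 1]) / (osc + eps)
    return ind

x = np.linspace(0, 1, 101)
f = np.where(x < 0.5, 0.0, 1.0) + 0.01 * np.sin(40 * x)  # step edge + texture
g = 3.0 * f + 7.0                                        # simulated light change
print(np.allclose(edge_indicator(f), edge_indicator(g)))
```

A thresholded gradient would respond three times more strongly to `g` than to `f`; the ratio-based indicator gives the same response to both, which is the invariance property the abstract describes.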
We propose and study quantitative measures of smoothness \(f\mapsto A(f)\) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles, the simplest example being \(A_{p}(f)=\|\sqrt{|\mathrm{det}(d^{2}f)|}\|_{L^{\tau}}\), which appears when approximating in the \(L^{p}\) norm by piecewise linear elements when \(\frac{1}{\tau}=\frac{1}{p}+1\). The quantities A(f) are not semi-norms, and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
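As a numerical illustration, \(A_{p}(f)\) can be approximated on a grid with finite-difference Hessians. For \(f(x,y)=x^{2}+y^{2}\) the Hessian is \(2I\), so \(\sqrt{|\mathrm{det}(d^{2}f)|}=2\) everywhere and the area-normalized norm equals 2 for any \(\tau\) (the discretization below is a sketch, not the paper's construction):

```python
import numpy as np

def A_tau(f_grid, h, tau):
    """Discrete sketch of A_p(f) = || sqrt(|det(d^2 f)|) ||_{L^tau},
    using centered finite differences for the Hessian; the norm is
    normalized per unit area."""
    fxx = (np.roll(f_grid, -1, 0) - 2 * f_grid + np.roll(f_grid, 1, 0)) / h**2
    fyy = (np.roll(f_grid, -1, 1) - 2 * f_grid + np.roll(f_grid, 1, 1)) / h**2
    fxy = (np.roll(np.roll(f_grid, -1, 0), -1, 1)
           - np.roll(np.roll(f_grid, -1, 0), 1, 1)
           - np.roll(np.roll(f_grid, 1, 0), -1, 1)
           + np.roll(np.roll(f_grid, 1, 0), 1, 1)) / (4 * h**2)
    det = fxx * fyy - fxy ** 2
    core = np.sqrt(np.abs(det))[2:-2, 2:-2]   # trim wrap-around boundary
    return float(np.mean(core ** tau) ** (1.0 / tau))

n = 101
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
h = 1.0 / (n - 1)
p = 2.0
tau = 1.0 / (1.0 / p + 1.0)           # 1/tau = 1/p + 1  ->  tau = 2/3
print(A_tau(x**2 + y**2, h, tau))     # Hessian = 2I, so the value is 2
```

A linear function, whose Hessian vanishes, gives \(A_{p}(f)=0\), consistent with the measure rewarding anisotropic flatness away from edges.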
This paper deals with compact label-based representations for trees. Consider an n-node undirected connected graph G with a predefined numbering on the ports of each node. The all-ports tree labeling ℒ_all gives each node v of G a label containing the port numbers of all the tree edges incident to v. The upward tree labeling ℒ_up labels each node v by the number of the port leading from v to its parent in the tree. Our measures of interest are the worst-case and total length of the labels used by the scheme, denoted
M_up(T) and S_up(T) for ℒ_up, and M_all(T) and S_all(T) for ℒ_all. The problem studied in this paper is the following: given a graph G and a predefined port labeling for it, with the ports of each node v numbered by 0,…,deg(v)−1, select a rooted spanning tree for G minimizing (one of) these measures. We show that the problem is polynomial for M_up(T), S_up(T), and S_all(T), but NP-hard for M_all(T) (even for 3-regular planar graphs). We show that for every graph G and port labeling there exists a spanning tree T for which S_up(T)=O(n log log n). We give a tight bound of O(n) in the cases of complete graphs with arbitrary labeling and arbitrary graphs with symmetric port labeling. We conclude by
discussing some applications for our tree representation schemes.
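For concreteness, here is a small sketch computing both labelings on a three-node example; label sizes are simply counted in stored port numbers, a simplification of the paper's bit-length measures:

```python
def tree_labels(ports, parent):
    """ports[v]: {port number -> neighbor}; parent[v]: parent of v, or
    None for the root. Returns the upward and all-ports labels of every
    node for the spanning tree defined by the parent pointers."""
    def port_to(v, u):
        return next(p for p, w in ports[v].items() if w == u)
    up, all_ = {}, {}
    for v in ports:
        tree_nbrs = {u for u in parent if parent[u] == v}  # children of v
        if parent[v] is not None:
            tree_nbrs.add(parent[v])
        all_[v] = sorted(port_to(v, u) for u in tree_nbrs)
        up[v] = None if parent[v] is None else port_to(v, parent[v])
    return up, all_

# Triangle graph on nodes 0, 1, 2 (all three edges present); the spanning
# tree rooted at 0 uses edges (0,1) and (1,2), leaving 0-2 as a non-tree edge.
ports = {0: {0: 1, 1: 2}, 1: {0: 0, 1: 2}, 2: {0: 1, 1: 0}}
parent = {0: None, 1: 0, 2: 1}
up, all_ = tree_labels(ports, parent)
print(up)     # upward labels:   {0: None, 1: 0, 2: 0}
print(all_)   # all-ports labels: {0: [0], 1: [0, 1], 2: [0]}
print(sum(len(l) for l in all_.values()))   # S_all in stored port entries: 4
```

The spanning-tree selection problem studied in the paper is then: over all valid `parent` assignments for a given `ports` map, minimize one of these size measures.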
A preliminary version of this paper has appeared in the proceedings of the 7th International Workshop on Distributed Computing
(IWDC), Kharagpur, India, December 27–30, 2005, as part of Cohen, R. et al.: Labeling schemes for tree representation. In:
Proceedings of the 7th International Workshop on Distributed Computing (IWDC), Lecture Notes in Computer Science, vol. 3741, pp. 13–24
(2005).
R. Cohen supported by the Pacific Theaters Foundation.
P. Fraigniaud and D. Ilcinkas supported by the project “PairAPair” of the ACI Masses de Données, the project “Fragile” of
the ACI Sécurité et Informatique, and by the project “Grand Large” of INRIA.
A. Korman supported in part by an Aly Kaufman fellowship.
D. Peleg supported in part by a grant from the Israel Science Foundation.
In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal
path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically
using a front propagation approach. We assume that the desired object has a closed boundary. This a priori knowledge of the
topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain
visited by the front will yield a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal
paths allows us to extract a closed contour from a 2D image. This approach can also be used to find an open curve, given extra information to serve as a stopping criterion. Detection of a variety of objects in real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image, a technique we call Geodesic Meshing. The proposed method is applied to 3D data with promising results.
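The front-propagation and minimal-path steps can be illustrated with a discrete Dijkstra analogue on a 2-D cost grid; this is a stand-in for the continuous fast-marching computation used in minimal-path methods, with a hypothetical cost map in place of an image-derived potential:

```python
import heapq

def minimal_path(cost, start, end):
    """Dijkstra-based minimal path on a 2-D cost grid: the front expands
    from `start` in order of accumulated cost, and the path to `end` is
    recovered by backtracking predecessor pointers."""
    H, W = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end                  # backtrack from end to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Low cost along the top row and right column (e.g. a strong image edge);
# the minimal path hugs that "contour" rather than cutting across.
cost = [[1, 1, 1],
        [9, 9, 1],
        [9, 9, 1]]
print(minimal_path(cost, (0, 0), (2, 2)))
```

Linking successive keypoints with such paths, and stopping when the propagated front closes on itself, yields the closed-contour extraction the abstract describes.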