Proteomics analysis of serum from patients with type 1 diabetes (T1D) may lead to novel biomarkers for prediction of disease and for patient monitoring. However, the serum proteome is highly sensitive to sample processing, and before proteomics biomarker research, serum cohorts should preferably be examined for potential bias between sample groups. SELDI-TOF MS protein profiling was used for preliminary evaluation of a biological bank with 766 serum samples from 270 patients with T1D, collected at 18 different paediatric centers representing 15 countries in Europe and Japan over 2 years (2000–2002). Samples collected 1 (n = 270), 6 (n = 248), and 12 (n = 248) months after T1D diagnosis were grouped across centers and compared. The serum protein profiles varied with collection site and day of analysis; however, markers of sample processing were not systematically different between samples collected at different times after diagnosis. Three members of the apolipoprotein family increased with time in patient serum collected 1, 6, and 12 months after diagnosis (ANOVA, p < 0.001). These results support the use of this serum cohort for further proteomic studies and illustrate the potential of high-throughput MALDI/SELDI-TOF MS protein profiling for evaluation of serum cohorts before proteomics biomarker research.
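The time-trend result above rests on a one-way ANOVA across the three sampling times. A minimal sketch of such a test on synthetic peak intensities (all values below are hypothetical, with `scipy.stats.f_oneway` standing in for the authors' actual analysis pipeline) might look like:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical SELDI-TOF peak intensities for one apolipoprotein peak,
# sampled 1, 6, and 12 months after diagnosis. The synthetic upward
# drift mimics the trend reported in the abstract; group sizes match
# the cohort (n = 270, 248, 248).
month_1 = rng.normal(loc=1.00, scale=0.15, size=270)
month_6 = rng.normal(loc=1.10, scale=0.15, size=248)
month_12 = rng.normal(loc=1.20, scale=0.15, size=248)

# One-way ANOVA: does mean intensity differ between collection times?
f_stat, p_value = f_oneway(month_1, month_6, month_12)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```

With group sizes in the hundreds, even a modest drift of this kind yields p well below the 0.001 threshold quoted in the abstract.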
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those exact terms. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Numerous numerical methods have been developed in an effort to accurately predict stresses in bones. The largest group are variants of the h-version of the finite element method (h-FEM), where low-order Ansatz functions are used. By contrast, we investigate a combination of high-order FEM and a fictitious domain approach, the finite cell method (FCM). While the FCM has been verified and validated in previous publications, this article proposes methods for making the FCM computationally efficient to the extent that it can be used for patient-specific, interactive bone simulations. This approach is called computational steering and allows the user to change input parameters such as the position of an implant, material, or loads, leading to an almost instantaneous change in the output (stress lines, deformations). This direct feedback gives users an immediate impression of the impact of their actions to an extent which is otherwise hard to obtain with classical non-interactive computations. Specifically, we investigate an application to pre-surgical planning of a total hip replacement, where it is desirable to select an optimal implant for a specific patient. Here, optimal means that the expected post-operative stress distribution in the bone closely resembles that before the operation.
Maintaining an awareness of the working context of fellow co-workers is crucial to successful cooperation in a workplace. For mobile, non-co-located workers, however, such workplace awareness is hard to maintain. This paper investigates how context-aware computing can be used to facilitate workplace awareness. In particular, we present the concept of Context-Based Workplace Awareness, which is derived from years of in-depth studies of hospital work and the design of computer-supported cooperative work technologies to support the distributed collaboration and coordination of clinical work within large hospitals. This empirical background has revealed that awareness of especially the social, spatial, temporal, and activity context plays a crucial role in the coordination of work in hospitals. The paper then presents and discusses technologies designed to support context-based workplace awareness, namely the AWARE architecture and the AwarePhone and AwareMedia applications. Based on almost two years' deployment of the technologies in a large hospital, the paper discusses how the four dimensions of context-based workplace awareness play out in the coordination of clinical work.
Iterative Feedback Tuning constitutes an attractive control-loop tuning method for processes in the absence of an accurate process model. It is a purely data-driven approach aiming at optimizing the closed-loop performance. The standard formulation ensures an unbiased estimate of the gradient of the loop-performance cost function with respect to the control parameters. This gradient is important in a search algorithm. The extension presented in this paper further ensures informative data to improve the convergence properties of the method and hence reduce the total number of required plant experiments, especially when tuning for disturbance rejection. Informative data is achieved through application of an external probing signal in the tuning algorithm. The probing signal is designed based on a constrained optimization which utilizes an approximate black-box model of the process. This model estimate is further used to guarantee nominal stability and to improve the parameter update using a line search algorithm for determining the iteration step size. The proposed algorithm is compared to the classical formulation in a simulation study of a disturbance rejection problem. This type of problem is notoriously difficult for Iterative Feedback Tuning due to the lack of excitation from the reference.
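The outer loop of such a tuning scheme is a gradient search on the closed-loop cost. The sketch below is a toy stand-in, not the authors' algorithm: a simulated first-order plant under proportional feedback rejecting a step disturbance, with a finite-difference gradient replacing the unbiased experiment-based gradient estimate that Iterative Feedback Tuning provides. The plant constants, cost weighting, step size, and iteration count are all hypothetical.

```python
def closed_loop_cost(kp, n=200):
    """Simulate y[k+1] = a*y[k] + b*u[k] + w under proportional
    feedback u = -kp*y (reference = 0) and return the quadratic cost
    sum(y^2 + u^2). A toy disturbance-rejection problem with
    hypothetical constants."""
    a, b, w = 0.9, 0.1, 0.1   # plant pole, input gain, step disturbance
    y, cost = 0.0, 0.0
    for _ in range(n):
        u = -kp * y
        cost += y * y + u * u
        y = a * y + b * u + w
    return cost

# Gradient descent on the controller gain. On a real plant, IFT would
# obtain this gradient from dedicated closed-loop experiments instead
# of finite differences on a model.
kp, step, eps = 0.2, 2e-3, 1e-4
for _ in range(200):
    grad = (closed_loop_cost(kp + eps) - closed_loop_cost(kp - eps)) / (2 * eps)
    kp -= step * grad

print(f"tuned kp = {kp:.2f}, cost = {closed_loop_cost(kp):.1f}")
```

For this plant the cost has an interior minimum near kp = 1, balancing disturbance attenuation against control effort, and the search converges to it from the detuned starting gain.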
Appearance-based methods, based on statistical models of the pixel values in an image (region) rather than geometrical object models, are increasingly popular in computer vision. In many applications, the number of degrees of freedom (DOF) in the image generating process is much lower than the number of pixels in the image. If there is a smooth function that maps the DOF to the pixel values, then the images are confined to a low-dimensional manifold embedded in the image space. We propose a method based on probabilistic mixtures of factor analyzers to 1) model the density of images sampled from such manifolds and 2) recover global parameterizations of the manifold. A globally nonlinear probabilistic two-way mapping between coordinates on the manifold and images is obtained by combining several, locally valid, linear mappings. We propose a parameter estimation scheme that improves upon an existing scheme and experimentally compare the presented approach to self-organizing maps, generative topographic mapping, and mixtures of factor analyzers. In addition, we show that the approach also applies to finding mappings between different embeddings of the same manifold.
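The core idea, that locally valid linear mappings can jointly cover a nonlinear manifold, can be illustrated with a crude non-probabilistic stand-in for the mixture model: k-means partitioning followed by per-region PCA on synthetic data sampled from a one-dimensional manifold. The data, cluster count, and latent dimension below are hypothetical choices, and scikit-learn's `PCA` replaces the factor analyzers of the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "images": points on a 1-D manifold (a closed curve)
# embedded in a 20-dimensional ambient space, with small noise.
t = rng.uniform(0.0, 2.0 * np.pi, size=1000)
basis = rng.normal(size=(2, 20))          # random linear embedding
X = np.column_stack([np.cos(t), np.sin(t)]) @ basis
X += 0.01 * rng.normal(size=X.shape)

# One linear mapping per region: k-means partitions the manifold, then
# a single-component PCA models each piece -- a caricature of the
# locally valid linear mappings a mixture of factor analyzers learns.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
ratios = [
    PCA(n_components=1).fit(X[labels == k]).explained_variance_ratio_[0]
    for k in range(8)
]
print(f"mean variance explained by one local factor: {np.mean(ratios):.3f}")
```

Globally the curve needs two ambient directions, but within each region a single latent dimension explains nearly all the variance, which is exactly what makes piecewise-linear modeling of such manifolds attractive.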
This paper reports on a case study of an organization that implements a software metrics program to measure the effects of its improvement efforts. The program measures key indicators of all completed projects and summarizes progress information in a quarterly management report. The implementation turns out to be long and complex, as the organization is confronted with dilemmas based on contradictory demands and value conflicts. The process is interpreted as a combination of a rational engineering process in which a metrics program is constructed and put into use, and an evolutionary cultivation process in which basic values of the software organization are confronted and transformed. The analysis exemplifies the difficulties and challenges that software organizations face when bringing known principles for software metrics programs into practical use. The article discusses the insights gained from the case in six lessons that may be used by Software Process Improvement managers in implementing a successful metrics program.