Similar Documents
20 similar documents found.
1.
Nowadays, Android is widely installed in mobile devices, including smartphones, tablet PCs, and PMPs, that are equipped with a wealth of standard capabilities such as a touch display, wireless communications, GPS, and memory-based storage. On Android systems, users perform many activities beyond phone calls: internet browsing, emailing, playing games, taking and sharing pictures, downloading and reading mobile e-books, and using multimedia applications. These activities and user responsiveness are largely determined by the Android IO subsystem, since the Android framework issues many IO operations through its framework components. In Android-based systems, the IO subsystem comprises both internal system storage and external data storage. We analyze the storage traffic of three Android killer applications (Gallery, web browsing, and SNS services) by profiling storage IO for various user activities. Based on this analysis, we summarize the statistical IO patterns of these applications and show the effect of the proposed IO write-buffering scheme, which reduces the number of writes issued to the storage system and reduces latency.
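A minimal sketch of the write-buffering idea described above; the class, method names and flush threshold are hypothetical, not taken from the paper. Writes to the same logical block are coalesced in memory and flushed in one batch, so repeated writes to a hot block cost a single device write per flush interval.

```python
class WriteBuffer:
    """Coalesce writes to the same logical block before flushing them to storage."""

    def __init__(self, storage, flush_threshold=32):
        self.storage = storage              # object exposing write(block_no, data)
        self.flush_threshold = flush_threshold
        self.pending = {}                   # block_no -> latest data for that block

    def write(self, block_no, data):
        self.pending[block_no] = data       # later writes to a block overwrite earlier ones
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for block_no, data in sorted(self.pending.items()):
            self.storage.write(block_no, data)
        self.pending.clear()


class ListStorage:
    """Dummy storage backend that just records the device writes it receives."""
    def __init__(self):
        self.ops = []
    def write(self, block_no, data):
        self.ops.append((block_no, data))


storage = ListStorage()
buf = WriteBuffer(storage, flush_threshold=2)
for block, data in [(7, b"a"), (7, b"b"), (3, b"c")]:
    buf.write(block, data)
buf.flush()
print(storage.ops)   # three logical writes reach the device as two
```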

2.
Kidney failure is a major health problem worldwide. Patients with end-stage renal disease require intensive medical support by dialysis or kidney transplantation. Current methods for diagnosing kidney disease are either invasive or insensitive, and renal function may decline by as much as 50% before it can be detected with current techniques. The goal of this study was therefore to identify biomarkers of kidney disease (associated with renal fibrosis) that can be used to develop a non-invasive clinical test for early disease detection. We used two protein-profiling technologies (SELDI-TOF MS and 2-D) to screen the plasma and kidney proteome for aberrantly expressed proteins in an experimental mouse model of unilateral ureteric obstruction, which mimics the pathology of human renal disease. Several differentially regulated proteins were detected at the plasma level in day-3-obstructed animals, including serum amyloid A1, fibrinogen α, haptoglobin precursor protein, haptoglobin, and major urinary proteins 11 and 8. Differentially expressed proteins detected at the tissue level included ras-like activator protein 2, haptoglobin precursor protein, malate dehydrogenase, α-enolase, and murine urinary protein (all p<0.05 versus controls). Immunohistochemistry was used to confirm the up-regulation of fibrinogen. Interestingly, these proteins fall largely into four major classes: (i) acute-phase reactants, (ii) cell-signaling molecules, (iii) molecules involved in cell growth and metabolism, and (iv) urinary proteins. These results provide new insights into the pathology of obstructive nephropathy and may facilitate the development of specific assays to detect and monitor renal fibrosis.

3.
This paper proposes a transfer-function-based method for analyzing the dynamic characteristics of an automatic cutting and profiling control system for a roadheader. The composition and main structure of the system are described, and transfer function models are derived for each basic component. Simulation experiments on the dynamic characteristics of the system during horizontal motion of the cutting head show that the open-loop transfer function consists of proportional, integral, and second-order oscillatory elements, making the system a Type I system. With the adjustable gain set to Kp = 14, the output response is good: there is no overshoot, the curve is smooth, the rise time is short at 0.428 s, and the settling time is 0.797 s. The system has no zeros, all of its characteristic roots lie in the left half of the s-plane, and it has positive gain and phase margins, indicating good stability. Field tests show that the vertical cutting error of the cutting head is less than 40 mm, the single-side positioning accuracy of the roadway cutting cross-section is less than 50 mm, and the repeatability of the cutting cross-section is less than 20 mm.
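The abstract specifies only the structure of the open-loop model (proportional, integral and second-order oscillatory elements) and the gain Kp = 14; the natural frequency and damping below are assumed values, so this is just an illustrative check of the step response and a crude rise-time estimate with SciPy, not the paper's model.

```python
import numpy as np
from scipy import signal

# Hypothetical open-loop model: gain * integrator * second-order oscillatory element.
# Only Kp = 14 comes from the abstract; wn and zeta are assumed for illustration.
Kp = 14.0
wn, zeta = 12.0, 0.4
num_ol = [Kp * wn**2]
den_ol = np.polymul([1.0, 0.0], [1.0, 2 * zeta * wn, wn**2])  # s * (s^2 + 2*zeta*wn*s + wn^2)

# Unity-feedback closed loop G/(1+G), whose step response gives rise/settling behaviour.
num_cl = num_ol
den_cl = np.polyadd(den_ol, num_ol)

t, y = signal.step(signal.TransferFunction(num_cl, den_cl))
rise = t[np.argmax(y >= 0.9 * y[-1])]   # crude rise-time estimate: first 90% crossing
print(f"final value ~= {y[-1]:.3f}, rise time ~= {rise:.3f} s")
```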

4.
Concept assignment identifies units of source code that are functionally related, even if this is not apparent from a syntactic point of view. Until now, the results of concept assignment have only been used for static analysis, mostly of program source code. This paper investigates the possibility of using concept information within a framework for dynamic analysis of programs. The paper presents two case studies involving a small Java program used in a previous research exercise, and a large Java virtual machine (the popular Jikes RVM system). These studies investigate two applications of dynamic concept information: visualization and profiling. The paper demonstrates two different styles of concept visualization, which show the proportion of overall time spent in each concept and the sequence of concept execution, respectively. The profiling study concerns the interaction between runtime compilation and garbage collection in Jikes RVM. For some benchmark cases, we are able to obtain a significant reduction in garbage collection time. We discuss how this phenomenon might be harnessed to optimize the scheduling of garbage collection in Jikes RVM.
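A minimal sketch of the profiling side, assuming a dynamic trace of (concept, duration) events; the concept names and trace format are invented. It aggregates per-concept execution time and reports each concept's share of overall time, which is what the first visualization style displays.

```python
from collections import defaultdict

# Hypothetical trace of (concept, duration-in-ms) events from an instrumented run.
trace = [("parsing", 12.0), ("gc", 3.5), ("compilation", 8.2), ("gc", 4.1), ("parsing", 6.7)]

totals = defaultdict(float)
for concept, duration in trace:
    totals[concept] += duration

grand_total = sum(totals.values())
for concept, t in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{concept:12s} {t:6.1f} ms  ({100 * t / grand_total:4.1f}% of overall time)")
```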

5.
Data envelopment analysis (DEA) uses extreme observations to identify superior performance, making it vulnerable to outliers. This paper develops a unified model to identify both efficient and inefficient outliers in DEA. Finding both types is important because many post-hoc analyses, performed after measuring efficiency, depend on the entire distribution of efficiency estimates; outliers that are distinguished by poor performance can therefore significantly alter the results. Besides allowing the identification of outliers, the method described is consistent with a relaxed set of DEA axioms. Several examples demonstrate the need to identify both efficient and inefficient outliers and the effectiveness of the proposed method. Applications of the model reveal that observations with low efficiency estimates are not necessarily outliers. In addition, a strategy to accelerate the computation is proposed, which can also be applied to the detection of influential observations.
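For context, the sketch below solves the standard input-oriented CCR envelopment LP with SciPy on a toy data set; the paper's unified outlier model builds on such efficiency scores under relaxed axioms, so this is background, not the authors' method.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form).
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Illustrative sketch only."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta over z = [theta, lambda_1..n]
    A_in = np.c_[-X[o][:, None], X.T]                 # sum_j lambda_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]    # sum_j lambda_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

X = np.array([[2.0, 3.0], [4.0, 1.0], [6.0, 6.0]])    # inputs of three DMUs
Y = np.array([[1.0], [1.0], [1.0]])                   # one unit of a single output each
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```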

6.
Docking-based virtual screening is an established component of structure-based drug discovery. Nevertheless, scoring and ranking of computationally docked ligand libraries still suffer from many false positives. Identifying optimal docking parameters for a target protein prior to virtual screening can improve experimental hit rates. Here, we examine protocols for virtual screening against an important but challenging class of drug targets, the protein tyrosine phosphatases. In this study, common interaction features were identified from an analysis of protein–ligand binding geometries in more than 50 complexed phosphatase crystal structures. Two interactions were consistently formed across all phosphatase inhibitors: (1) a polar contact with the conserved arginine residue, and (2) at least one interaction with the P-loop backbone amide. To investigate the significance of these features for phosphatase–ligand binding, a series of seeded virtual screening experiments was conducted on three phosphatase enzymes, PTP1B, Cdc25b and IF2. When the conserved arginine and P-loop amide interactions were used as pharmacophoric constraints during docking, enrichment of the virtual screen increased significantly in all three phosphatases, by up to a factor of two in some cases. Additionally, the use of such pharmacophoric constraints considerably improved the ability of docking to predict the inhibitor's bound pose, decreasing the RMSD to the crystallographic geometry by 43% on average. Constrained docking improved enrichment of screens against both open and closed conformations of PTP1B. Incorporating an ordered water molecule in PTP1B screening was also found to generally improve enrichment. The knowledge-based computational strategies explored here can inform structure-based design of new phosphatase inhibitors using docking-based virtual screening.
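The pose-prediction improvement is reported as RMSD to the crystallographic geometry; a minimal RMSD computation over two identically ordered atom sets looks like this (the coordinates are invented, and real workflows must also handle atom matching and symmetry).

```python
import numpy as np

def pose_rmsd(coords_pred, coords_ref):
    """Heavy-atom RMSD between a docked pose and the crystallographic pose.
    Both arrays are (n_atoms, 3) with atoms in the same order."""
    diff = coords_pred - coords_ref
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

pred = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, -0.3], [2.4, 0.2, 0.1]])
ref  = np.array([[0.1, 0.0, 0.0], [1.3, 0.0, -0.1], [2.3, 0.1, 0.0]])
print(f"RMSD = {pose_rmsd(pred, ref):.2f} A")
```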

7.
The classic Data Envelopment Analysis (DEA) models were developed under the assumption that all inputs and outputs are non-negative, whereas in the real business world we may face cases with negative data. There has therefore been a need to adapt DEA models so that they can handle inputs and outputs that take both negative and non-negative values. It can readily be demonstrated that the assumption of constant returns to scale (CRS) is not tenable in technologies with negative data. One interesting and challenging question is therefore how to determine the state of returns to scale (RTS) in the presence of negative data under the variable returns to scale (VRS) technology. Accordingly, in this contribution we first address the efficiency measure and then suggest a method for determining the state of RTS in the presence of negative input and output values, an issue which has received little attention in the DEA literature so far. Finally, the main results are illustrated with some examples.

8.
A data file on the geochemistry of ferromanganese deposits has been compiled through the joint effort of a group of individuals belonging to several institutions: Ecole des Mines, Paris; Woods Hole Oceanographic Institution, U.S.A.; and CNEXO, France. We present in this paper the file management and coding schemes used for this world-wide compilation, as well as some results of a multidimensional statistical technique that is suitable for revealing global trends in the geochemistry of manganese nodules. Various computer programs have been developed to display the recorded data using criteria such as geographical location, type of metal analyzed, and name of the author. Some simple statistical summaries can also be selected in order to condense the information related to samples that have the same location or that are close to one another. The available statistics are of three types: average, maximum, and minimum. A multidimensional statistical analysis termed correspondence analysis has been used to account for the similarities between the sampled nodules in view of the entire set of chemical elements analyzed. The results are displayed as two-dimensional diagrams in which the samples and the chemical elements are represented as points. The distance between two points is a measure of the correlation between the associated samples or elements; the shorter the distance, the higher the correlation. Correspondence analysis is a powerful instrument for studying the various aspects of the geochemistry of manganese nodules. It can help in pinpointing the main factors that influence the affinities between sampled nodules across the entire set of variables.
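Correspondence analysis can be sketched as an SVD of the standardised residuals of the sample-by-element abundance table; the data below are invented, but the resulting row and column coordinates are exactly the kind of two-dimensional sample/element diagrams described above.

```python
import numpy as np

# Toy abundance table: rows are nodule samples, columns are chemical elements.
N = np.array([[12.0, 3.0, 5.0],
              [10.0, 4.0, 6.0],
              [ 2.0, 9.0, 1.0],
              [ 3.0, 8.0, 2.0]])

P = N / N.sum()                                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                  # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of samples (rows) and elements (columns) on the leading axes.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
print(np.round(row_coords[:, :2], 3))
print(np.round(col_coords[:, :2], 3))
```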

9.
Support vector machines (SVMs) are among the most powerful, popular and accurate classification techniques. In this work, we evaluate whether the accuracy of SVMs can be further improved using training set selection (TSS), where only a subset of training instances is used to build the SVM model. In contrast to existing approaches, we focus on wrapper TSS techniques, in which candidate subsets of training instances are evaluated using the SVM training accuracy. We consider five wrapper TSS strategies and show that those based on evolutionary approaches can significantly improve the accuracy of SVMs.
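A toy wrapper-TSS loop, assuming scikit-learn and a simple (1+1)-style evolutionary search over binary instance masks scored by SVM accuracy on a validation split; the paper's five wrapper strategies are more elaborate, so this is only a sketch of the idea.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # Reject degenerate subsets (too small or single-class), otherwise train and score an SVM.
    if mask.sum() < 5 or len(np.unique(y_tr[mask])) < 2:
        return 0.0
    return SVC().fit(X_tr[mask], y_tr[mask]).score(X_val, y_val)

best = rng.random(len(X_tr)) < 0.5                 # random initial instance subset
best_fit = fitness(best)
for _ in range(100):                               # (1+1)-style evolutionary search
    child = best ^ (rng.random(len(X_tr)) < 0.05)  # mutate: flip about 5% of the bits
    child_fit = fitness(child)
    if child_fit >= best_fit:
        best, best_fit = child, child_fit
print(f"kept {best.sum()}/{len(X_tr)} instances, validation accuracy {best_fit:.3f}")
```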

10.
In oral mucosa lesions it is frequently difficult to differentiate between precursor lesions and already manifest oral squamous cell carcinoma. Multiple scalpel biopsies are therefore necessary to detect tumor cells at early stages and to guarantee an accurate follow-up. We analyzed oral brush biopsies (n = 49) of normal mucosa, inflammatory and hyperproliferative lesions, and oral squamous cell carcinoma with ProteinChip Arrays (SELDI) as a non-invasive method to characterize putative tumor cells. Three proteins were found that differentiated between these three stages. They distinguish normal cells from tumor cells with a sensitivity of 100% and a specificity of 91%, and distinguish inflammatory/hyperproliferative lesions from tumor cells with a sensitivity of up to 91% and a specificity of up to 90%. Two of these proteins were identified by immunodepletion as S100A8 and S100A9, and this identification was confirmed by immunocytochemistry. For the first time, brush biopsies have been successfully used for proteomic biomarker discovery. The identified protein markers are highly specific for distinguishing the three analyzed stages and thereby reflect the progression from normal to premalignant non-dysplastic and finally to tumor tissue. This knowledge could be used as a first diagnostic step in the monitoring of mucosal lesions.

11.
Objective: Information Retrieval (IR) is strongly rooted in experimentation, where new and better ways to measure and interpret the behavior of a system are key to scientific advancement. This paper presents an innovative visualization environment, the Visual Information Retrieval Tool for Upfront Evaluation (VIRTUE), which eases the experimental evaluation process and makes it more effective. Methods: VIRTUE supports and improves both performance analysis and failure analysis. For performance analysis, VIRTUE offers interactive visualizations based on well-known IR metrics, allowing us to explore system performance and to easily grasp the main problems of a system. For failure analysis, VIRTUE provides visual features and interaction that allow researchers and developers to easily spot critical regions of a ranking and grasp possible causes of a failure. Results: VIRTUE was validated through a user study involving IR experts. The study reports on (a) the scientific relevance and innovation and (b) the comprehensibility and efficacy of the visualizations. Conclusion: VIRTUE eases the interaction with experimental results, supports users in the evaluation process and reduces user effort. Practice: VIRTUE will be used by IR analysts to analyze and understand experimental results. Implications: VIRTUE improves the state of the art in evaluation practice and integrates the visualization and IR research fields in an innovative way.
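Performance analysis in such a tool rests on standard IR metrics; as an example of the kind of quantity that gets visualized, the sketch below computes average precision for a single ranked result list (document identifiers are invented, and this is not VIRTUE's code).

```python
def average_precision(ranked_doc_ids, relevant_ids):
    """Average precision of one ranked list: mean of precision@k taken at the
    ranks k where relevant documents are retrieved, divided by |relevant|."""
    relevant_ids = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

print(average_precision(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}))  # ~0.556
```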

12.
Many recent papers have dealt with the application of feedforward neural networks to financial data processing. This powerful neural model can implement very complex nonlinear mappings, but when outputs are not available or clustering of patterns is required, the use of unsupervised models such as self-organizing maps is more suitable. The present work shows the capabilities of self-organizing feature maps for the analysis and representation of financial data and as an aid in financial decision-making. For this purpose, we analyse the Spanish banking crisis of 1977–1985 and the Spanish economic situation in 1990 and 1991, making use of this unsupervised model. Emphasis is placed on the analysis of the synaptic weights, which is fundamental for delimiting regions on the map, such as bankrupt or solvent regions, where similar companies are clustered. The time evolution of the companies and other important conclusions can be drawn from the resulting maps.
Characters and symbols used and their meaning:
- nx: x dimension of the neuron grid, in number of neurons
- ny: y dimension of the neuron grid, in number of neurons
- n: dimension of the input vector, number of input variables
- (i, j): indices of a neuron on the map
- k: index of the input variables
- w_ijk: synaptic weight that connects the k-th input with the (i, j) neuron on the map
- W_ij: weight vector of the (i, j) neuron
- x_k: k-th component of the input vector
- X: input vector
- α(t): learning rate
- α_0: starting learning rate
- α_f: final learning rate
- R(t): neighbourhood radius
- R_0: starting neighbourhood radius
- R_f: final neighbourhood radius
- t: iteration counter
- t_Rf: number of iterations until reaching R_f
- t_f: number of iterations until reaching α_f
- h(·): lateral interaction function
- σ: standard deviation
- ∀: for every
- d(x, y): distance between the vectors x and y
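A compact self-organizing map training loop in NumPy, using the symbols listed above (nx, ny, α(t), R(t), h(·)); the data are synthetic and the schedule values are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # synthetic "financial ratio" vectors, n = 4
nx, ny, n = 8, 8, X.shape[1]
W = rng.normal(size=(nx, ny, n))          # synaptic weight vectors W_ij

# Grid coordinates of every neuron, used by the lateral interaction function h(.).
grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij"), axis=-1)

t_f, alpha0, alphaf, R0, Rf = 5000, 0.5, 0.01, 4.0, 1.0
for t in range(t_f):
    x = X[rng.integers(len(X))]
    frac = t / t_f
    alpha = alpha0 * (alphaf / alpha0) ** frac          # decaying learning rate alpha(t)
    R = R0 * (Rf / R0) ** frac                          # shrinking neighbourhood radius R(t)
    bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (nx, ny))
    d2 = ((grid - np.array(bmu)) ** 2).sum(-1)          # squared grid distance to the winner
    h = np.exp(-d2 / (2 * R ** 2))                      # Gaussian lateral interaction h(.)
    W += alpha * h[..., None] * (x - W)

# After training, each company vector maps to its best-matching unit on the nx-by-ny grid.
print(np.unravel_index(np.argmin(((W - X[0]) ** 2).sum(-1)), (nx, ny)))
```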

13.
There are several purposes for analyzing a program: functional or performance analysis, debugging, or, more recently, mapping a program to a new parallel or distributed architecture. In this paper, we introduce an effective method for deriving the Execution Graph (EG) of a program. First, the Unix profiling tool Gprof is used to obtain the Execution Model (EM) of a C program. Then the event-driven monitoring tool AICOS-SIMPLE is used to obtain the EG, which includes not only the call graph but also the execution time table of the program. This method is suitable for analyzing modern distributed programs. As an example of the analysis, the well-known HTTP protocol under NCSA Mosaic is chosen, and an EG of NCSA Mosaic at the routing level is given.
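A minimal sketch of assembling an execution graph from profile records; the record format and function names are invented. Each edge of the call graph is annotated with accumulated execution time and call counts, the two ingredients of the EG described above.

```python
from collections import defaultdict

# Hypothetical profile records: (caller, callee, time spent in callee via this edge, calls).
records = [
    ("main", "parse_url", 0.8, 120),
    ("main", "fetch_page", 3.1, 120),
    ("fetch_page", "tcp_connect", 1.2, 120),
    ("fetch_page", "render", 1.5, 118),
]

edges = defaultdict(lambda: [0.0, 0])      # (caller, callee) -> [time, calls]
node_time = defaultdict(float)             # callee -> accumulated execution time
for caller, callee, t, calls in records:
    edges[(caller, callee)][0] += t
    edges[(caller, callee)][1] += calls
    node_time[callee] += t

for (caller, callee), (t, calls) in edges.items():
    print(f"{caller:12s} -> {callee:12s} {t:5.2f}s over {calls} calls")
```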

14.
Recently, manufacturing companies have been attempting to increase competitiveness through business collaboration with cooperating companies rather than within their own companies. To facilitate this collaboration, they are attempting to adopt, or are already using, collaboration systems that support a number of functions and services. However, it is very difficult to apply an existing system to other organizations or industrial sectors without customization or reconfiguration, because the functional or service requirements of users usually differ according to their domain knowledge. In order to re-apply and disseminate an existing system to other companies, therefore, the system must be reconfigured by modifying, upgrading, or newly developing some portions of it. During this customization, the functions or services of the system must be refined to satisfy user requirements. To facilitate the reconfiguration of collaboration systems, this paper first defines user patterns and then proposes a data-mining-based method for investigating and analyzing those patterns. The proposed method identifies normal versus abnormal patterns, where an abnormal pattern shows a drastic increase in the use of a specific function or service, and it automatically makes the system recognize abnormal patterns as new normal patterns when the abnormal patterns persist for a long time. We conduct experiments and comparison studies using an Apriori-like approach in order to establish the effectiveness of the proposed method. We also suggest a guideline for the reconfiguration of function modules or services within a specific collaboration system.
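A brute-force frequent-itemset count over hypothetical usage sessions (function names and the support threshold are invented); a genuine Apriori-like approach would additionally prune candidates using the downward-closure property. Frequent sets of co-used functions play the role of normal patterns, against which a drastic increase in the use of some function would stand out.

```python
from itertools import combinations
from collections import Counter

# Hypothetical usage sessions: the set of collaboration-system functions used in each session.
sessions = [
    {"share_doc", "comment", "notify"},
    {"share_doc", "comment"},
    {"share_doc", "notify"},
    {"upload", "share_doc", "comment"},
]
min_support = 2

def frequent_itemsets(sessions, min_support, max_size=3):
    frequent = {}
    for size in range(1, max_size + 1):
        counts = Counter()
        for s in sessions:
            for combo in combinations(sorted(s), size):
                counts[combo] += 1
        level = {c: n for c, n in counts.items() if n >= min_support}
        if not level:
            break
        frequent.update(level)
    return frequent

print(frequent_itemsets(sessions, min_support))
```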

15.
Land change modelers often create maps of future land use from a reference land use map. However, future land use maps may mislead decision-makers, who are often unaware of the sensitivity of, and the uncertainty in, land use maps due to errors in the data. Since most metrics that communicate uncertainty require a reference land use map to calculate accuracy, assessing uncertainty becomes challenging when no reference map for the future is available. This study develops a new conceptual framework for sensitivity analysis and uncertainty assessment (FSAUA) which compares multiple maps under various data-error scenarios. FSAUA performs sensitivity analyses on land use maps using a reference map and assesses uncertainty in predicted maps. FSAUA was applied using three well-known land change models (ANN, CART and MARS) in Delhi, India, and was found to be a practical tool for communicating uncertainty to end-users who develop reliable planning decisions.
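One simple ingredient of such a framework is an agreement measure between a perturbed map and the reference map, re-evaluated under several data-error scenarios; the rasters and error rates below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 3, size=(100, 100))   # synthetic reference map with 3 land use classes

def perturb(land_map, error_rate):
    """Inject classification error by relabelling a random fraction of cells."""
    noisy = land_map.copy()
    flip = rng.random(land_map.shape) < error_rate
    noisy[flip] = rng.integers(0, 3, size=int(flip.sum()))
    return noisy

for error_rate in (0.0, 0.05, 0.10, 0.20):        # data-error scenarios
    predicted = perturb(reference, error_rate)
    agreement = float((predicted == reference).mean())
    print(f"error rate {error_rate:.2f} -> overall agreement {agreement:.3f}")
```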

16.
A design process, whether for a product or for a service, is composed of a large number of activities connected by data and information exchanges. The quality of these exchanges, referred to in this paper as collaboration, requires the ability to exchange data and information that are useful, understandable and unambiguous to the different designers involved. In this paper, a global framework is first set out for process/product performance management. The research question then focuses on the definition and evaluation of the performance of collaborations and, by extension, of the design process in its entirety. This performance evaluation requires the definition of several key elements, such as the object to evaluate, the performance criteria, indicators and action variables. To define the object of evaluation, the paper relies on a literature study of collaboration, resulting in an ECORE meta-model of collaborative processes. The measurement of collaboration performance is, for its part, based on the concept of interoperability: the measure estimates the technical and conceptual interoperability of the different pairwise collaborations. The paper concludes by proposing a tooled methodology for evaluating the performance of collaborations, comprising two main phases: process modeling and interoperability measurement. Tooling is provided through the Eclipse Modeling Framework (EMF), using its (meta-)model editing, constraint validation and model comparison features. The applicability of the methodology is illustrated using a case study in design.

17.
M. Woodman, D. C. Ince. Software, 1985, 15(11): 1057-1072
This paper describes a portable software tool used for the processing and maintenance of data flow diagrams, which form the basis of structured analysis techniques. The tool itself is based on the idea that data flow diagrams can be modelled by means of semantic nets and can be manipulated by a semantic net processor. A major feature of the tool is the facilities it provides for the maintenance programmer.

18.
One of the most important activities in strategic planning for a health-care system is the effective allocation of scarce resources. Most such strategies attempt to create more efficient systems based on better organizational and management structures. It is therefore necessary to develop systematic models and evaluation methods that support a strategic planning process addressing issues such as the location of services and the effective use of resources such as equipment, funds and workforce. Such modeling approaches need to quantify the effect of changes in the location of providers, the opening or closure of providers, and the dynamic transformation of the services offered at each provider. In this paper we propose a methodology that takes into account health service provider efficiencies based on multiple measures. These efficiencies are then employed to determine health providers' locations and service allocations, including the distribution of new services as well as the redistribution of existing services. The approach employs data envelopment analysis (DEA) and integer programming (IP) location-allocation models and can be used both as an immediate evaluation tool and as a long-term planning aid.

19.
This research project investigates the ability of neural networks, specifically the backpropagation algorithm, to integrate fundamental and technical analysis for financial performance prediction. The predictor attributes include 16 financial statement variables and 11 macroeconomic variables. The rate of return on common shareholders' equity is used as the target variable. Financial data for 364 S&P companies are extracted from the CompuStat database, and macroeconomic variables are extracted from the Citibase database, for the study period 1985–1995. Experiments 1, 2 and 3 use one, two and three years of financial data as predictors, respectively; Experiment 4 uses three years of financial data plus the macroeconomic data. Moreover, in order to compensate for data noise and parameter misspecification, as well as to reveal the prediction logic and procedure, we apply a rule extraction technique to convert the connection weights of trained neural networks into symbolic classification rules. The performance of the neural networks is compared with the average return of the top one-third of returns in the market (maximum benchmark), which approximates the return attainable with perfect information, as well as with the overall market average return (minimum benchmark), which approximates the return from highly diversified portfolios. Paired t tests are carried out to assess the statistical significance of mean differences. The experimental results indicate that neural networks using one or multiple years of financial data consistently and significantly outperform the minimum benchmark, but not the maximum benchmark. Neural networks with both financial and macroeconomic predictors outperform neither benchmark in this study. The results also show that the average return of 0.25398 obtained from the extracted rules is the only result comparable to the maximum benchmark of 0.2786. Consequently, we demonstrate rule extraction as a post-processing technique for improving prediction accuracy and for explaining the prediction logic to financial decision makers.
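A compact sketch of the modelling setup, with synthetic data standing in for the CompuStat/Citibase variables; scikit-learn's MLPRegressor is used here instead of a hand-rolled backpropagation implementation, and no rule extraction is shown.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(364, 16))              # synthetic stand-in for 16 financial statement variables
roe = X[:, :3].sum(axis=1) * 0.05 + rng.normal(scale=0.02, size=364)   # synthetic target (ROE)

X_tr, X_te, y_tr, y_te = train_test_split(X, roe, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print(f"test R^2 = {model.score(scaler.transform(X_te), y_te):.3f}")
```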

20.
Recent studies have indicated that companies are increasingly experiencing Data Quality (DQ) related problems as more complex data are being collected. To address such problems, the literature suggests implementing a Total Data Quality Management (TDQM) program consisting of the following phases: DQ definition, measurement, analysis and improvement. Accordingly, this paper reports an empirical study based on a questionnaire distributed to financial institutions worldwide, with the aims of identifying the most important DQ dimensions, assessing the DQ level of credit risk databases along those dimensions, analyzing DQ issues and suggesting improvement actions in a credit risk assessment context. The questionnaire is structured according to the framework of Wang and Strong and incorporates three additional DQ dimensions found to be important in the current context (actionable, alignment and traceable). Additionally, the paper contributes to the literature by developing a scorecard index for assessing the DQ level of credit risk databases using the DQ dimensions identified as most important. Finally, the study explores the key DQ challenges and causes of DQ problems and suggests improvement actions. The statistical analysis of the empirical study delineates the nine most important DQ dimensions, which include accuracy and security.
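The scorecard idea can be illustrated as a weighted average over dimension-level questionnaire scores; the dimension names echo the paper's, but the weights and scores below are invented.

```python
# Hypothetical questionnaire results: mean score per DQ dimension on a 1-5 scale,
# and importance weights summing to 1 (both invented for illustration).
scores  = {"accuracy": 4.1, "security": 4.4, "completeness": 3.2,
           "traceable": 2.8, "actionable": 3.5}
weights = {"accuracy": 0.30, "security": 0.25, "completeness": 0.20,
           "traceable": 0.10, "actionable": 0.15}

dq_index = sum(weights[d] * scores[d] for d in scores)
print(f"scorecard DQ index = {dq_index:.2f} / 5")
```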
