BACKGROUND: The high incidence of locoregional recurrences and distant metastases after curative surgery for gastric cancer calls for improved locoregional control and systemic adjuvant treatment. METHODS: In a randomized clinical trial of adjuvant FAM2 chemotherapy, the quality of surgery was evaluated by comparing surgical and pathology data. Univariate and multivariate analyses were performed to evaluate the effect of patient-, tumor-, and therapy-related prognostic factors on survival and time to recurrence. RESULTS: Of 314 patients randomized from 28 European institutions, 159 comprised the control group and 155 the FAM2 group. After a median follow-up of 80 months, no statistically significant difference in survival was found between the groups. For time to recurrence, however, treated patients had a significant advantage over controls (p = 0.02). On univariate analysis, statistically significant differences in survival and time to progression emerged for T, N, disease stage, and "adequacy" of surgery. The multivariate analysis retained preoperative Hb level, T, N, and "adequacy" of surgery for survival time; and T, N, "adequacy" of surgery, and adjuvant chemotherapy for time to recurrence. CONCLUSIONS: Disease stage is the most important prognostic factor. "Adequate" surgery has an important effect. Adjuvant FAM2 delayed recurrence but did not influence overall survival.
Classification usually relies on flat, batch learners, assuming problems are stationary and that there are no relations between class labels. Nevertheless, several real-world problems do not satisfy these premises: data have labels organized hierarchically and are made available in streaming fashion, meaning their behavior can drift over time. Existing studies on hierarchical classification do not consider data streams as input, and thus data are assumed stationary and handled by batch learners. The same can be said of work on streaming data, where hierarchical classification is overlooked. Studies in each area individually are promising, yet they do not tackle the intersection of the two. This study analyzes the main characteristics of the state-of-the-art work on hierarchical classification for streaming data with respect to five aspects: (i) problems tackled, (ii) datasets, (iii) algorithms, (iv) evaluation metrics, and (v) research gaps in the area. We performed a systematic literature review of primary studies and retrieved 3,722 papers, of which 42 were identified as relevant and used to answer the aforementioned research questions. We found that the problems handled by hierarchical classification of data streams mainly include the classification of images, human activities, texts, and audio; the datasets are mostly purpose-built or synthetic; the algorithms and evaluation metrics are well-known techniques or based on them; and the research gaps relate to dynamic contexts, data complexity, and computational resource constraints. We also provide implications for future research and experiments that consider the characteristics shared between hierarchical classification and data stream classification.
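One of the well-known evaluation metrics in this area is hierarchical precision/recall, which scores a prediction by the overlap of ancestor sets in the class hierarchy rather than by exact leaf match. A minimal sketch of the idea (the toy hierarchy and label sets are illustrative, not from any surveyed dataset):

```python
def h_precision_recall(true_paths, pred_paths):
    """Hierarchical precision/recall over a batch of predictions.

    Each element is the set containing a class plus all its ancestors
    in the hierarchy, so a prediction in the right subtree still earns
    partial credit.
    """
    inter = sum(len(t & p) for t, p in zip(true_paths, pred_paths))
    h_prec = inter / sum(len(p) for p in pred_paths)
    h_rec = inter / sum(len(t) for t in true_paths)
    return h_prec, h_rec

# Toy hierarchy: root -> animal -> {dog, cat}
true_labels = [{"root", "animal", "dog"}]
pred_labels = [{"root", "animal", "cat"}]
hp, hr = h_precision_recall(true_labels, pred_labels)  # both 2/3
```

Predicting "cat" for a true "dog" shares two of three ancestors, so the prediction scores 2/3 instead of the flat metric's 0, which is why such metrics recur in hierarchical classification studies.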
In the present work, we propose a theoretical model to identify and prioritize the risks involved in a biofuel supply chain. We adopt a set of indicators associated with determinant factors of the supply chain to identify risks, which are characterized through a risk matrix. We consider the five largest world biodiesel producers and include China due to its global market importance and the potential impacts of its growth on the environment and society. To determine the impacts and the probability of occurrence of risks, we use the Canberra distance as the metric. For ease of analysis and interpretation, the results are conveniently expressed as matrices. To illustrate the potential of the scheme, and for the sake of simplicity, the more comprehensive discussion focuses on the Brazilian case, restricted to the Technology and Innovation and the Integration, Logistics and Infrastructure determinant factors (dimensions) of the biodiesel supply chain. With respect to these determinant factors, the Brazilian biodiesel chain shows strong vulnerability compared with developed and developing countries, although the evolution of the data over recent years indicates small improvements in the Integration, Logistics and Infrastructure dimension. Although the calculations in this work are restricted to the Canberra distance, the approach may be applied with other distances to compare or validate the results. This work contributes a model of vulnerability to risks, providing policy makers and stakeholders with a tool to design, analyze, and improve the sustainability of the system by measuring its risks. The study of the contribution of each indicator suggests which corrective actions to take and which indicators should be prioritized.
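The Canberra distance used here to quantify impacts and probabilities has a standard closed form, d(u, v) = Σ_i |u_i − v_i| / (|u_i| + |v_i|). A minimal sketch of the computation (the indicator vectors below are illustrative placeholders, not actual supply-chain data):

```python
def canberra(u, v):
    """Canberra distance between two indicator vectors.

    Terms where both components are zero are skipped, following the
    usual convention that 0/0 contributes nothing to the sum.
    """
    total = 0.0
    for u_i, v_i in zip(u, v):
        den = abs(u_i) + abs(v_i)
        if den:
            total += abs(u_i - v_i) / den
    return total

# Hypothetical normalized indicators for one country vs. a benchmark
brazil = [0.3, 0.7, 0.5]
benchmark = [0.6, 0.9, 0.5]
d = canberra(brazil, benchmark)
```

Because each term is normalized by the component magnitudes, the Canberra distance weights relative rather than absolute differences, which suits indicators measured on heterogeneous scales.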
The move to Internet Protocol Television (IPTV) has challenged the traditional television industry by opening the Internet to high-quality real-time television content delivery. It has thereby provided an enabling set of key technologies to understand and foster further innovation in the multimedia landscape and to create dynamics in the TV value chain. This editorial provides a brief overview of this special issue. It begins with a short introduction to IPTV technology and then summarizes the main contributions of the papers selected for this special issue, highlighting their salient features and novel results.
In this work we present a general (mono- and multiobjective) optimization framework for the technological improvement of biochemical systems. The starting point of the method is a mathematical model of the investigated system in ordinary differential equations (ODEs), based on qualitative biological knowledge and quantitative experimental data. The method takes advantage of the special structural features of a family of ODEs called power-law models to reduce the computational complexity of the optimization program. In this way, the genetic manipulation of a biochemical system to meet a certain biotechnological goal can be expressed as an optimization program with desirable properties such as linearity or convexity. The general optimization method is presented and discussed in its linear and geometric programming versions. We furthermore illustrate the use of the method with several real case studies. We conclude that the technological improvement of microorganisms can be achieved by combining mathematical modelling and optimization. The systematic nature of this approach facilitates the redesign of biochemical systems and makes it a predictive exercise rather than a trial-and-error procedure.
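The key structural property of power-law models exploited here is that each rate term is a product of power functions, so it becomes linear after a logarithmic change of variables; this is what allows the design problem to be cast as a linear or geometric program. A minimal sketch of that transformation (the rate constant and kinetic orders are made-up values, not from the case studies):

```python
import math

def power_law_rate(gamma, kinetic_orders, concentrations):
    """Power-law rate term: V = gamma * prod_j X_j ** f_j."""
    v = gamma
    for f_j, x_j in zip(kinetic_orders, concentrations):
        v *= x_j ** f_j
    return v

def log_rate(gamma, kinetic_orders, log_concentrations):
    """Same rate in log space: ln V = ln gamma + sum_j f_j * y_j,
    with y_j = ln X_j -- linear in the new variables y_j."""
    return math.log(gamma) + sum(
        f_j * y_j for f_j, y_j in zip(kinetic_orders, log_concentrations)
    )

X = [2.0, 0.5]      # metabolite concentrations (made up)
f = [0.8, -0.3]     # kinetic orders (made up)
gamma = 1.5         # rate constant (made up)
y = [math.log(x) for x in X]
# The two formulations agree exactly:
assert abs(math.log(power_law_rate(gamma, f, X)) - log_rate(gamma, f, y)) < 1e-12
```

Since steady-state conditions built from such terms become linear equalities in y, a design objective over the y_j can be handed to a standard linear or geometric programming solver.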
The availability of multicore processors and programmable NICs, such as TOEs (TCP/IP Offloading Engines), provides new opportunities for designing efficient network interfaces to cope with the gap between the improvement rates of link bandwidths and microprocessor performance. This gap poses important challenges related to the high computational requirements associated with the traffic volumes and wider functionality that the network interface has to support. Given the rate of link bandwidth improvement and ever-changing, increasing application demands, efficient network interface architectures require scalability and flexibility. An opportunity to reach these goals comes from exploiting the parallelism in the communication path by distributing the protocol processing work across the processors available in the computer, i.e., multicore microprocessors and programmable NICs. Thus, after a brief review of the different solutions previously proposed for speeding up network interfaces, this paper analyzes the onloading and offloading alternatives. Both strategies try to release host CPU cycles by executing the communication workload on other processors present in the node. Nevertheless, whereas onloading uses another general-purpose processor, either included in a chip multiprocessor (CMP) or in a symmetric multiprocessor (SMP), offloading takes advantage of processors in programmable network interface cards (NICs). From our experiments, implemented using a full-system simulator, we provide a fair and more complete comparison between onloading and offloading. We show that the relative improvement in peak throughput offered by offloading and onloading depends on the ratio of application workload to communication overhead, on the message sizes, and on the characteristics of the system architecture, more specifically the bandwidth of the buses and the way the NIC is connected to the system processor and memory. In our implementations, offloading provides lower latencies than onloading, although CPU utilization and interrupt rates are lower for onloading. Taking into account the conclusions of our experimental results, we propose a hybrid network interface that can take advantage of both programmable NICs and multicore processors.
Wireless networks can vary both the transmission power and the modulation of links. Existing routing protocols do not take transmission power control (TPC) and modulation adaptation (also known as rate adaptation, RA) into account at the same time, even though the performance of wireless networks can be significantly improved when routing algorithms use link characteristics to build their routes. This article proposes and evaluates extensions to routing protocols to cope with TPC and RA. The enhancements can be applied to any link-state or distance-vector routing protocol. An evaluation considering node density, node mobility, and link errors shows that TPC- and RA-aware routing algorithms improve the average latency and the end-to-end throughput while consuming less energy than traditional protocols.
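To illustrate why joint TPC/RA awareness matters for route selection, one simple candidate link cost (a hypothetical metric for illustration, not the article's exact formulation) is the expected energy per successfully delivered bit, which trades transmission power against modulation rate and link error:

```python
def energy_per_delivered_bit(tx_power_w, rate_bps, per):
    """Expected energy (J) per successfully delivered bit on a link.

    tx_power_w: transmission power in watts
    rate_bps:   modulation (bit) rate in bits/s
    per:        packet error rate observed for this (power, rate) pair
    """
    return tx_power_w / (rate_bps * (1.0 - per))

# A TPC/RA-aware protocol would evaluate candidate (power, rate, PER)
# operating points per link and feed the cheapest into its link-state
# or distance-vector metric (the values below are made up):
candidates = [
    (0.10, 1e6, 0.01),    # low power, low rate, reliable
    (0.20, 5.5e6, 0.05),  # higher power, much higher rate
]
best = min(candidates, key=lambda c: energy_per_delivered_bit(*c))
```

A protocol that tunes only power or only rate sees just one axis of this trade-off, which is the motivation for handling TPC and RA together.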
Hosts with several, possibly heterogeneous and/or multicore, processors provide new challenges and opportunities to accelerate applications with high communication bandwidth requirements. Many opportunities to scale these network applications with the increase in link bandwidths come from exploiting the parallelism provided by the presence of several processing cores in the servers, not only for computing the workload of the user application but also for decreasing the overhead associated with the network interface and the system software.