Probabilistic topic modeling algorithms like Latent Dirichlet Allocation (LDA) have become powerful tools for the analysis of large collections of documents (such as papers, projects, or funding applications) in science, technology and innovation (STI) policy design and monitoring. However, selecting an appropriate and stable topic model for a specific application (by adjusting the hyperparameters of the algorithm) is not a trivial problem. Common validation metrics like coherence or perplexity, which focus on the quality of topics, are not a good fit in applications where the quality of the document similarity relations inferred from the topic model is especially relevant. Relying on graph analysis techniques, the aim of our work is to establish a new methodology for the selection of hyperparameters which is specifically oriented to optimize the similarity metrics emanating from the topic model. To do this, we propose two graph metrics: the first measures the variability of the similarity graphs that result from different runs of the algorithm for a fixed value of the hyperparameters, while the second measures the alignment between the graph derived from the LDA model and another obtained using metadata available for the corresponding corpus. Through experiments on various corpora related to STI, we show that the proposed metrics provide relevant indicators for selecting the number of topics and building persistent topic models that are consistent with the metadata. Their use, which can be extended to other topic models beyond LDA, could facilitate the systematic adoption of these techniques in STI policy analysis and design.
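The first metric (run-to-run variability of the similarity graph) can be sketched as follows. This is a minimal illustration, not the paper's actual definition: the thresholded cosine-similarity graph, the Jaccard overlap of edge sets, and all numeric values are assumptions chosen for the example.

```python
import numpy as np

def similarity_graph(theta, threshold=0.8):
    # Cosine similarity between document-topic distributions,
    # thresholded into a boolean adjacency matrix (no self-loops).
    unit = theta / np.linalg.norm(theta, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)
    return sim >= threshold

def edge_jaccard(adj_a, adj_b):
    # Overlap of the edge sets of two runs' graphs (1.0 = identical graphs).
    inter = np.logical_and(adj_a, adj_b).sum()
    union = np.logical_or(adj_a, adj_b).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(0)
run_a = rng.dirichlet(np.ones(10), size=100)        # doc-topic matrix, run 1
run_b = np.abs(run_a + rng.normal(0, 0.01, run_a.shape))  # a slightly perturbed rerun
run_b /= run_b.sum(axis=1, keepdims=True)
stability = edge_jaccard(similarity_graph(run_a), similarity_graph(run_b))
```

A stability score near 1 across repeated runs for a given number of topics would indicate a persistent model in the sense discussed above; the metadata-alignment metric could reuse `edge_jaccard` against a metadata-derived graph.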
Pan-Gyn cancers account for 1 in 5 cancer cases worldwide, breast cancer being the most commonly diagnosed and responsible for most cancer deaths in women. The high incidence and mortality of these malignancies, together with the handicaps of taxanes (the first-line treatments), make the development of alternative therapeutics urgent. Taxanes exhibit low water solubility, requiring formulations that themselves cause side effects, and these drugs are often associated with dose-limiting toxicities and with the appearance of multi-drug resistance (MDR). Here, we propose targeting tubulin with compounds directed to the colchicine site, as their smaller size offers pharmacokinetic advantages and makes them less prone to MDR efflux. We have prepared 52 new Microtubule Destabilizing Sulfonamides (MDS) that largely evade MDR-mediated resistance and show improved aqueous solubility. The most potent compounds, N-methyl-N-(3,4,5-trimethoxyphenyl)-4-methylaminobenzenesulfonamide 38, N-methyl-N-(3,4,5-trimethoxyphenyl)-4-methoxy-3-aminobenzenesulfonamide 42, and N-benzyl-N-(3,4,5-trimethoxyphenyl)-4-methoxy-3-aminobenzenesulfonamide 45, show nanomolar antiproliferative potencies against ovarian, breast, and cervix carcinoma cells, similar to or even better than those of paclitaxel. The compounds behave as tubulin-binding agents, causing an evident disruption of the microtubule network, in vitro Tubulin Polymerization Inhibition (TPI), and mitotic catastrophe followed by apoptosis. Our results suggest that these novel MDS may be promising alternatives to taxane-based chemotherapy in chemoresistant Pan-Gyn cancers.
Many studies have demonstrated the crucial role of vocabulary in predicting reading performance in general. More recent work has indicated that one particular facet of vocabulary (its depth) is more closely related to language comprehension, especially inferential comprehension. On this basis, we developed a training application specifically designed to improve vocabulary depth. The objective of this study was to test the effectiveness of this mobile application. Its effectiveness was examined on 3rd and 4th grade children's vocabulary (breadth and depth), decoding and comprehension performances. A randomized waiting-list control paradigm was used in which an experimental group first received the intervention during the first 4 weeks (between pretest and posttest1); thereafter, a waiting control group received the training for the next 4 weeks (between posttest1 and posttest2). Results showed that the application led to significant improvements in vocabulary depth performance, as well as a significant transfer effect to reading comprehension. However, we did not observe such a beneficial effect on either vocabulary breadth or written word identification. These results are discussed in terms of the links between vocabulary depth and comprehension, and the opportunities the app presents for remedying language comprehension deficits in children.
Software and Systems Modeling - Many model transformation scenarios require flexible execution strategies as they should produce models with the highest possible quality. At the same time,...
In recent years, there has been rapid expansion of glycan synthesis, fueled by the recognition that the structural complexity of sugars translates to a myriad of biological functions. Such chemical syntheses involve many challenges, mostly due to the regio- and stereochemical aspects of glycosidic bond formation. One-pot strategies were developed to assist in attaining faster and more economical access to the glycan constructs. On this front, achievements in protecting group manipulation, glycosylation, and combinations of these have been reported. Protecting group manipulations in one pot take advantage of the reaction compatibility of commonly used transformations, many of which occur with high regioselectivity. Sequential glycosylations, on the other hand, rely on leaving group orthogonalities and reactivity tuning, as well as the preactivation technique. Altogether, these approaches offer attractive means to the much needed glycan structures and, consequently, help usher in advances in glycoscience.
The diversity of life relies on a handful of chemical elements (carbon, oxygen, hydrogen, nitrogen, sulfur and phosphorus) as part of essential building blocks; some other atoms are needed to a lesser extent, but most of the remaining elements are excluded from biology. This circumstance limits the scope of biochemical reactions in extant metabolism – yet it offers a phenomenal playground for synthetic biology. Xenobiology aims to bring novel bricks to life that could be exploited for (xeno)metabolite synthesis. In particular, the assembly of novel pathways engineered to handle nonbiological elements (neometabolism) will broaden chemical space beyond the reach of natural evolution. In this review, xeno-elements that could be blended into nature's biosynthetic portfolio are discussed together with their physicochemical properties and tools and strategies to incorporate them into biochemistry. We argue that current bioproduction methods can be revolutionized by bridging xenobiology and neometabolism for the synthesis of new-to-nature molecules, such as organohalides.
The authors reanalyzed assessment center (AC) multitrait-multimethod (MTMM) matrices containing correlations among postexercise dimension ratings (PEDRs) reported by F. Lievens and J. M. Conway (2001). Unlike F. Lievens and J. M. Conway, who used a correlated dimension-correlated uniqueness model, we used a different set of confirmatory-factor-analysis-based models (1-dimension-correlated exercise and 1-dimension-correlated uniqueness models) to estimate dimension and exercise variance components in AC PEDRs. Results of the reanalyses suggest that, consistent with previous narrative reviews, exercise variance components dominate over dimension variance components after all. Implications for AC construct validity and possible redirections of research on the validity of ACs are discussed.
This paper discusses the convenience of using two-dimensional (2-D) coding techniques for the compression of electrocardiogram (ECG) signals. These signals present a very clear periodicity that can be exploited by the use of a 2-D time/frequency transform to decorrelate them as much as possible. A brief theoretical approach is given to justify the use of this technique, and a comparison is made between 2-D and one-dimensional (1-D) uniform quantization scenarios. The influence of the error as well as the frame size on the estimation of the fundamental period is studied.
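The 2-D idea above can be sketched as follows: stack one period of the signal per row so that the transform decorrelates both within and across beats. This is a simplified illustration with a synthetic periodic waveform, a DCT as the 2-D transform, and a midtread uniform quantizer; the paper's actual transform, period estimation, and quantizer design are not reproduced here.

```python
import numpy as np
from scipy.fft import dct, idct, dctn, idctn

def quantize(x, step):
    # Midtread uniform quantizer with step size `step`
    return np.round(x / step) * step

period, beats = 180, 20                     # hypothetical beat length and count
t = np.arange(beats * period)
ecg = np.sin(2 * np.pi * t / period) + 0.2 * np.sin(6 * np.pi * t / period)  # toy periodic "ECG"

frame = ecg.reshape(beats, period)          # one beat per row -> 2-D frame
step = 0.05

# 2-D scenario: transform across samples AND across beats, exploiting periodicity
rec_2d = idctn(quantize(dctn(frame, norm="ortho"), step), norm="ortho").ravel()
# 1-D baseline: transform each beat independently
rec_1d = idct(quantize(dct(frame, norm="ortho", axis=1), step), norm="ortho", axis=1).ravel()

mse_2d = np.mean((ecg - rec_2d) ** 2)
mse_1d = np.mean((ecg - rec_1d) ** 2)
```

Because both transforms are orthonormal, the reconstruction MSE equals the quantization MSE in the coefficient domain; the 2-D transform additionally concentrates energy across rows when the beats are well aligned, which is what the period-estimation error studied in the paper affects.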
Crude and refined hazelnut oils from different countries were characterised by major and minor compounds. Fatty acids, triacylglycerides, waxes, sterols, methyl-sterols, terpenic and aliphatic alcohols, tocopherols, tocotrienols and hydrocarbons were identified and quantified by gas chromatography and high-performance liquid chromatography. The levels of these chemical compounds in hazelnut oils, together with the equivalent carbon numbers and triacylglyceride carbon numbers, were compared with the results of analyses of samples of other vegetable oils. The statistical procedure of cluster analysis was used to characterise hazelnut oils versus other edible oils.
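The cluster-analysis step can be sketched as follows. The compositional values below are purely illustrative placeholders (not the study's measurements), and the choice of Euclidean distance with Ward linkage is an assumption for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical compositional profiles: rows are oil samples,
# columns are illustrative percentages of three fatty acids.
profiles = np.array([
    [82.0,  8.5,  5.0],   # hazelnut-like: high oleic
    [81.5,  9.0,  4.8],   # second hazelnut-like sample
    [71.0, 10.0, 12.0],   # olive-like
    [23.0, 54.0, 11.0],   # seed-oil-like: high linoleic
])

# Agglomerative clustering (Ward linkage on Euclidean distances),
# then cut the dendrogram into two clusters.
z = linkage(profiles, method="ward")
labels = fcluster(z, t=2, criterion="maxclust")
```

With real chromatographic data, the same dendrogram cut would show whether hazelnut oils group apart from the other edible oils across the measured compound families.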