Similar Literature
A total of 20 similar documents were retrieved.
1.
This study examines the implications of the predicted big data revolution in the social sciences for research using the Triple Helix (TH) model of innovation and knowledge creation in the context of developing and transitional economies. While big data research promises to transform the nature of social inquiry and improve the world economy by increasing the productivity and competitiveness of companies and enhancing the functioning of the public sector, it may also lead to a growing divide in research capabilities between developed and developing economies. More specifically, given uneven access to digital data and the scarcity of computational resources and talent, developing countries are at a disadvantage when it comes to employing data-driven, computational methods for studying the TH relations between universities, industries, and governments. The scientometric analysis of the TH literature conducted in this study reveals a growing disparity between developed and developing countries in their use of innovative computational research methods. As a potential remedy, an extension of the TH model is proposed that includes non-market actors both as subjects of study and as potential providers of computational resources, education, and training.

2.
Picture exposure time and frame repetition rate are the two fundamental parameters which define most high-speed systems. The various light sources, both continuous and those of short duration, are reviewed. Air sparks and xenon-filled gas tubes have the advantage of short durations of a few microseconds and can be used either as single flashes or in multiple form.

Considerable diversity of cameras exists. They fall into two categories: those that form normal and separate images, and those in which the photographic image is “dissected” by the taking mechanism and then reconstructed afterwards by optical means. Examples of both types are given.

Film drum cameras provide a simpler and cheaper technique when the event lasts for a reasonably short time. Electro-optical shutters are beginning to be widely employed. In aerodynamic flow research, many of these methods are combined with Schlieren systems.

Applications of the methods have been made to animal motion studies in zoology and to the biological sciences for living material under the microscope. The field of application has been widest in the physical and engineering sciences. Military applications in connection with projectile trajectories, impact studies, and explosions in air and underwater have been the spur to much research and development of equipment.

3.

Multilevel modeling is often used in the social sciences for analyzing data that have a hierarchical structure, e.g., students nested within schools. In an earlier study, we investigated the performance of various prediction rules for predicting a future observable within a hierarchical data set (Afshartous & de Leeuw, 2004). We apply the multilevel prediction approach to the NELS:88 educational data in order to assess predictive performance on a real data set; four candidate models are considered, and predictions are evaluated via both cross-validation and bootstrapping methods. The goal is to develop model selection criteria that assess the predictive ability of candidate multilevel models. We also introduce two plots that 1) aid in visualizing the extent to which the multilevel model predictions are “shrunk” or translated from the OLS predictions, and 2) help identify whether certain groups exist for which the predictions are particularly good or bad.

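As a rough illustration of the shrinkage these plots visualize, the following Python sketch compares per-group OLS predictions (group means) with random-intercept predictions shrunk toward the grand mean; the simulated data, the variance values, and the standard empirical-Bayes shrinkage formula are illustrative assumptions, not material taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate hierarchical data: students (observations) nested in schools (groups).
n_groups, tau2, sigma2 = 8, 4.0, 9.0          # between- and within-group variances (assumed known)
group_sizes = rng.integers(3, 30, n_groups)
true_means = rng.normal(50.0, np.sqrt(tau2), n_groups)
y = [rng.normal(m, np.sqrt(sigma2), n) for m, n in zip(true_means, group_sizes)]

grand_mean = np.mean(np.concatenate(y))

for j, yj in enumerate(y):
    ols_pred = yj.mean()                       # per-group OLS prediction: the raw group mean
    lam = tau2 / (tau2 + sigma2 / len(yj))     # shrinkage factor; small groups shrink more
    mlm_pred = lam * ols_pred + (1 - lam) * grand_mean   # random-intercept (empirical-Bayes) prediction
    print(f"group {j}: n={len(yj):2d}  OLS={ols_pred:6.2f}  multilevel={mlm_pred:6.2f}  shrinkage={1-lam:.2f}")
```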

4.
The chemical conversion of small molecules such as H2, H2O, O2, N2, CO2, and CH4 to energy and chemicals is critical for a sustainable energy future. However, the high chemical stability of these molecules poses grand challenges to the practical implementation of these processes. In this regard, computational approaches such as density functional theory, microkinetic modeling, data science, and machine learning have guided the rational design of catalysts by elucidating mechanistic insights, identifying active sites, and predicting catalytic activity. Here, the theory and methodologies for heterogeneous catalysis and their applications to small-molecule activation are reviewed. An overview of fundamental theory and key computational methods for designing catalysts is given, with particular attention to emerging data science techniques. Applications of these methods for finding efficient heterogeneous catalysts for the activation of the aforementioned small molecules are then surveyed. Finally, promising future directions for computational catalysis are discussed, focusing on the challenges and opportunities for new methods.
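As a hedged, minimal example of one method named above, the sketch below sets up a toy microkinetic model (adsorption, surface reaction, and desorption with Arrhenius rate constants) and integrates the coverage equations; the barriers, prefactor, and temperature are invented for illustration and do not come from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

kB, T = 8.617e-5, 500.0              # Boltzmann constant (eV/K), temperature (K)
nu = 1e13                            # assumed pre-exponential factor (1/s)

def arrhenius(Ea):                   # rate constant for an activation barrier Ea in eV
    return nu * np.exp(-Ea / (kB * T))

k_ads, k_rxn, k_des = arrhenius(0.2), arrhenius(0.8), arrhenius(0.9)  # illustrative barriers

def rhs(t, theta):
    thA, thB = theta                 # surface coverages; free sites = 1 - thA - thB
    return [k_ads * (1 - thA - thB) - k_rxn * thA,     # A adsorbs, then reacts to B
            k_rxn * thA - k_des * thB]                 # B forms, then desorbs

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], method="BDF")  # stiff: rates span many orders
print("final coverages (A*, B*):", sol.y[:, -1])
```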

5.
Abstract:

Engineering management involves an extremely diverse range of topics, problems, and questions, many of which involve both human and non-human elements. An analysis of the methods used in the field of engineering management is important in order to identify trends in the use of methods, to promote the training of future research practitioners, and to build overall knowledge for and about change. This article reviews the studies reported in journals relevant to engineering management and describes and categorizes the data collection and analysis methods, reporting relative frequencies and trends in the use of these methods.

6.
Abstract

We have developed a robust and rapid computational method for processing the raw spectral data collected from thin-film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example, highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time-consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. We also discuss an additional benefit: the LUT method can be used with a wider range of interference layer thicknesses and with experimental configurations that are incompatible with methods requiring fits of the spectral response.
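A minimal sketch of the LUT idea follows, assuming a toy two-beam interference model in place of the real IRIS optics: candidate spectra are precomputed once over a thickness grid, and each measured spectrum is then matched to its nearest table entry instead of being fit iteratively.

```python
import numpy as np

wavelengths = np.linspace(450e-9, 650e-9, 60)     # measured spectral range (m)
n_film = 1.45                                      # assumed film refractive index

def toy_reflectance(d):
    # Toy two-beam interference fringe for film thickness d; stands in for the real IRIS model.
    return 0.5 + 0.5 * np.cos(4 * np.pi * n_film * d / wavelengths)

# 1) Precompute the lookup table once over a thickness grid.
grid = np.linspace(100e-9, 300e-9, 2001)
lut = np.stack([toy_reflectance(d) for d in grid])           # shape (grid points, wavelengths)

# 2) "Measure" a noisy spectrum and find the nearest LUT entry (no iterative fitting).
true_d = 217e-9
measured = toy_reflectance(true_d) + np.random.default_rng(1).normal(0, 0.01, wavelengths.size)
best = np.argmin(((lut - measured) ** 2).sum(axis=1))        # least-squares match over the table
print(f"estimated thickness: {grid[best]*1e9:.1f} nm (true {true_d*1e9:.1f} nm)")
```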

7.
De Mol, Liesbeth. NTM, 2019, 27(4): 443–478.

What is the significance of high-speed computation for the sciences? How far does it result in a practice of simulation which affects the sciences on a very basic level? To offer more historical context to these recurring questions, this paper revisits the roots of computer simulation in the development of the ENIAC computer and the Monte Carlo method.

With the aim of identifying more clearly what really changed (or not) in the history of science in the 1940s and 1950s due to the computer, I will emphasize the continuities with older practices and develop a two-fold argument. Firstly, one can find a diversity of practices around ENIAC which tends to be ignored if one focuses only on the ENIAC itself as the originator of Monte Carlo simulation. Following from this, I claim, secondly, that there was no simulation around ENIAC. Not only is the term ‘simulation’ not used within that context, but the analysis also shows how ‘simulation’ is an effect of three interrelated sets of different practices around the machine: (1) the mathematics which the ENIAC users employed and developed, (2) the programs, (3) the physicality of the machine. I conclude that, in the context discussed, the most important shifts in practice are about rethinking existing computational methods. This was done in view of adapting them to the high-speed and programmability of the new machine. Simulation then is but one facet of this process of adaptation, singled out by posterity to be viewed as its principal aspect.


8.

While randomized controlled experiments are often considered the gold standard for predicting causal relationships between variables, they are expensive if one is interested in understanding the complete set of causal relationships governing a large set of variables, and it may not be possible to manipulate certain variables due to ethical or practical constraints. To address these scenarios, procedures have been developed that use conditional independence relationships among passively observed variables to predict which variables may or may not be causally related to other variables. Until recently, most of these procedures assumed that the data consist of a single i.i.d. dataset of observations; in practice, however, researchers often have access to multiple similar datasets, e.g. from multiple labs studying the same problem, which measure slightly different variable sets and whose recording conventions and procedures may vary. This paper discusses recent state-of-the-art approaches for predicting causal relationships using multiple observational and experimental datasets in these contexts.

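As a hedged illustration of the conditional-independence machinery such procedures rely on, the sketch below implements a standard Fisher-z partial-correlation test on simulated Gaussian data; this is a generic building block of constraint-based discovery, not one of the specific multi-dataset algorithms the paper surveys.

```python
import numpy as np
from scipy import stats

def partial_corr_test(data, i, j, cond, alpha=0.05):
    """Fisher-z test of whether variables i and j are independent given the set `cond`.
    Constraint-based discovery algorithms (e.g., PC) call tests like this repeatedly."""
    sub = data[:, [i, j] + list(cond)]
    prec = np.linalg.pinv(np.corrcoef(sub, rowvar=False))   # precision of the submatrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])      # partial correlation of i, j | cond
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha                                         # True -> judged independent

# Toy chain X -> Y -> Z: X and Z are dependent marginally but independent given Y.
rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = 2 * x + rng.normal(size=5000)
z = -y + rng.normal(size=5000)
d = np.column_stack([x, y, z])
print("X independent of Z?      ", partial_corr_test(d, 0, 2, []))    # expect False
print("X independent of Z | Y?  ", partial_corr_test(d, 0, 2, [1]))   # expect True
```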

9.
The area of computational quantum chemistry, which applies the principles of quantum mechanics to molecular and condensed systems, has developed dramatically over the last few decades, owing both to increased computer power and to the efficient implementation of quantum chemical methods in readily available computer programs. Because of this, accurate computational techniques can now be applied to much larger systems than before, bringing the area of biochemistry within the scope of electronic-structure quantum chemical methods. The rapid pace of progress of quantum chemistry makes it a very exciting research field; calculations that are too computationally expensive today may be feasible in a few months' time! This article reviews the current application of 'first-principles' quantum chemistry in biochemical and life sciences research, and discusses its future potential. The current capability of first-principles quantum chemistry is illustrated in a brief examination of computational studies on neurotransmitters, helical peptides, and DNA complexes.

10.
Liu, Yan-Li; Yuan, Wen-Juan; Zhu, Shao-Hong. Scientometrics, 2022, 127(1): 369–383.

Research on COVID-19 has proliferated rapidly since the outbreak of the pandemic at the end of 2019, and many articles have aimed to provide insight into this fast-growing theme. The social sciences have also put effort into research on problems related to COVID-19, with numerous documents having been published. Some studies have evaluated the growth of the scientific literature on COVID-19 through scientometric analysis, but most of these analyses focused on medical research while ignoring social science research on COVID-19. This is the first scientometric study of the performance of social science research on COVID-19, providing insight into the landscape, the research fields, and international collaboration in this domain. Data obtained from the SSCI on the Web of Science platform were analyzed using VOSviewer. The overall performance of the documents was described, and keyword co-occurrence and co-authorship networks were then visualized. Six main research fields with highly active topics were confirmed by analysis and visualization; mental health and psychology were clearly the focus of most social science research related to COVID-19. The USA contributed the most publications and had the most extensive collaborations globally, with Harvard University as the leading institution, and collaborations throughout the world were strongly related to geographical location. Considering the social impact of the COVID-19 pandemic, this scientometric study is significant for identifying the growth of literature in the social sciences and can help researchers within this field gain quantitative insights into the development of research on COVID-19. The results are useful for finding potential collaborators and for identifying the frontier and gaps in social science research on COVID-19 to shape future studies.

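For readers unfamiliar with keyword co-occurrence networks, the following sketch counts co-occurring keyword pairs over a few invented records, producing the kind of weighted edge list that VOSviewer visualizes; the records are hypothetical, not the study's SSCI data.

```python
from collections import Counter
from itertools import combinations

# Toy records standing in for SSCI keyword lists exported from Web of Science.
records = [
    ["covid-19", "mental health", "anxiety"],
    ["covid-19", "mental health", "depression"],
    ["covid-19", "remote work"],
    ["covid-19", "anxiety", "depression"],
]

cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):   # each unordered keyword pair once per record
        cooc[(a, b)] += 1

# Edge list of the kind a network-visualization tool consumes: keyword pair -> weight.
for (a, b), w in cooc.most_common():
    print(f"{a} -- {b}: {w}")
```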

11.

The importance of trend and cross-national studies of general social attitudes has become widely recognized. However, few published trend or cross-national data sets or studies of general social attitudes that consistently use identical survey systems have been identified. Despite some methodological limitations, the studies published in recent years have provided an encouraging base for future trend and cross-national studies of general social attitudes. Cooperation will be essential for future studies as they apply various improved methodologies not only to existing trend and cross-national data but also to new survey data collection procedures. Following the development of cohort analysis for trend studies of general social attitudes, and of correspondence analysis for cross-national studies, combinations of both cohort and correspondence analyses have been proposed.

Finally, it is suggested that, for both trend and cross-national studies of general social attitudes, it is better to develop not only the hard-mode approach but also the soft-mode approach, by fully utilizing the former. Conducting this type of research in the context of trend and cross-national analyses will enable empirical analyses that verify existing sociological theories as well as contribute to the establishment of new or modified ones.


12.
Alharbey, Riad; Kim, Jong In; Daud, Ali; Song, Min; Alshdadi, Abdulrahman A.; Hayat, Malik Khizar. Scientometrics, 2022, 127(5): 2661–2681.

Health maintenance is one of the foremost pillars of human society and needs up-to-date solutions to medical problems. Advances in the biomedical field have intensified the information load that exists in the form of clinical reports, research papers, lab tests, and so on. Extracting meaningful insights from this corpus is as important as its growth if it is to be valuable for modern medicine. In biomedical text mining, the areas explored include protein-protein interactions, entity-relationship detection, and so on. The biomedical effects of drugs are significant when administered to a living organism, yet the biomedical literature has not been widely explored in terms of gene-drug relations and hence needs investigation. Indexing methods can be used for ranking gene-drug relations. In the scientific literature, Hirsch's h-index is usually used to quantify the impact of an individual author. Likewise, in this research we propose the Drug-Index, a quantifiable measure that can be used to detect gene-drug relations. It is useful in drug discovery, diagnosis, and personalized treatment using suitable drugs for the relevant genes. For strong and reliable gene-drug relationship discovery, drugs are extracted from a subset of MEDLINE, a bibliographic medical database. The detected drugs are verified against the PharmacoGenomics KnowledgeBase (PharmGKB), a publicly available medical knowledge base from Stanford University.

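The abstract does not give the Drug-Index formula, so the sketch below simply applies Hirsch's h-index rule to hypothetical per-drug mention counts for one gene, as an assumed analogue of the proposed measure.

```python
def h_index(counts):
    """Hirsch's h-index: the largest h such that at least h items each have a count >= h."""
    counts = sorted(counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical co-mention counts of one gene with several drugs, mined from MEDLINE abstracts.
gene_drug_mentions = {"drugA": 9, "drugB": 7, "drugC": 4, "drugD": 2, "drugE": 1}
print("h-index-style score:", h_index(gene_drug_mentions.values()))   # prints 3
```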

13.
14.
Abstract

This article provides historical narratives describing approaches to studying, managing, and quantitatively valuing research: methods used by industry, particularly the pharmaceutical industry, and approaches taken by economists, including government economists using the Leontief input-output framework. The article documents the persistent belief that research expenditures generate future economic value, along with the equally persistent frustrations of attempting to measure such value, particularly for basic research. The article then discusses the results of applying the Leontief method to Association of University Technology Managers (AUTM) data. Strengths and weaknesses of these approaches are noted. Additional studies and calls for data capture are suggested, and the potential benefits of such efforts are described.
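As a worked reminder of the Leontief framework mentioned above, the sketch below solves the standard input-output relation x = (I - A)^(-1) d for a two-sector toy economy; the coefficients are invented and unrelated to the AUTM data.

```python
import numpy as np

# Two-sector toy economy; entries are illustrative, not from the article's analysis.
A = np.array([[0.2, 0.3],      # A[i, j]: input from sector i needed per unit of sector j output
              [0.1, 0.4]])
final_demand = np.array([100.0, 50.0])

# Leontief input-output relation: x = A x + d  =>  x = (I - A)^(-1) d
total_output = np.linalg.solve(np.eye(2) - A, final_demand)
print("gross output by sector:", total_output)
```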

15.
Background: When designing pharmaceutical products, the relationships between causal factors and pharmaceutical responses are intricate. A Bayesian network (BN) was used to clarify the latent structure underlying the causal factors and pharmaceutical responses of a tablet containing solid dispersion (SD) of indomethacin (IMC).

Method: IMC, a poorly water-soluble drug, was tested with polyvinylpyrrolidone as the carrier polymer. Tablets containing either an SD or a physical mixture of IMC, prepared with different quantities of magnesium stearate, microcrystalline cellulose, and low-substituted hydroxypropyl cellulose, and subjected to different compression forces, were selected as the causal factors. The pharmaceutical responses were the dissolution properties and tensile strength before and after the accelerated test, together with a similarity factor used as an index of storage stability.

Result: BN models were constructed based on three measurement criteria for the appropriateness of the graph structure. Of these, the BN model based on Akaike's information criterion agreed most closely with the results of the analysis of variance. To quantitatively estimate the causal relationships underlying the latent structure of this system, conditional probability distributions were inferred from the BN model. The responses were accurately predicted by the BN model, as reflected in the high correlation coefficients obtained in a leave-one-out cross-validation procedure.

Conclusion: The BN technique provides a better understanding of the latent structure underlying causal factors and responses.
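As a generic illustration of the leave-one-out validation reported above (not the authors' BN code), the sketch below runs LOOCV for a simple linear predictor on simulated factor-response data and reports the correlation between held-out predictions and observations.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 3))                    # stand-ins for the formulation (causal) factors
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0, 0.3, 30)   # a simulated pharmaceutical response

preds = np.empty_like(y)
for i in range(len(y)):                          # leave-one-out: refit without sample i, predict it
    mask = np.arange(len(y)) != i
    beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    preds[i] = X[i] @ beta

print("LOO correlation coefficient:", np.corrcoef(y, preds)[0, 1])
```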

16.
Abstract

In this review, we provide an overview of the development of quantitative structure–property relationships that incorporate the impact of data uncertainty in small, limited-knowledge data sets, from which we rapidly develop new and larger databases. Unlike traditional database development, this informatics-based approach is concurrent with the identification and discovery of the key metrics controlling structure–property relationships; even more importantly, we are now in a position to build materials databases based on design 'intent' and not just design parameters. This permits, for example, the establishment of materials databases that can be used for targeted multifunctional properties, rather than one characteristic at a time as is presently done. This review summarizes the computational logic of building such virtual databases and gives some examples in the field of complex inorganic solids for scintillator applications.

17.
ABSTRACT

Tracer studies of fluid catalyst flow have been conducted in five different fluid catalytic cracking units containing from 200 to 1000 tons of circulating catalyst. A single pulse injection of about 2 pounds of catalyst labeled with Au-198 or Sc-46 can yield the following information:

1. Catalyst circulation rate.

2. Mean catalyst residence time in specific parts of the circuit.

3. Catalyst inventory in the corresponding vessels.

4. Catalyst residence time distribution in these vessels.

5. Total unit inventory by tracer dilution.

6. Attrition rate and average lifetime of the labeled catalyst.

Experimental methods and data analysis are described, and examples are given. The effects of residence time distribution on reaction kinetics are discussed.
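A toy illustration of items 2 and 5 of the list above follows; the synthetic decay curve, noise level, and mixed tracer fraction are assumptions standing in for real detector data.

```python
import numpy as np

# Synthetic detector response to a pulse injection; real curves come from Au-198 / Sc-46 tracers.
t = np.linspace(0.0, 600.0, 601)                  # time after injection (s)
tau = 120.0                                       # assumed true mean residence time (s)
c = np.exp(-t / tau)                              # well-mixed-vessel RTD shape
c += np.random.default_rng(4).normal(0.0, 0.002, t.size)

# Item 2: mean residence time is the first moment of the RTD, integral(t*C)/integral(C).
t_mean = (t * c).sum() / c.sum()                  # uniform time grid, so dt cancels
print(f"mean residence time ~ {t_mean:.0f} s (true {tau:.0f} s)")

# Item 5: total inventory by tracer dilution = injected tracer mass / mixed tracer mass fraction.
injected_lb = 2.0                                 # pounds of labeled catalyst injected
mixed_fraction = 4e-6                             # assumed tracer mass fraction after full mixing
print(f"unit inventory ~ {injected_lb / mixed_fraction / 2000:.0f} tons")
```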

18.
Abstract

Machine identification of discrete event systems (DES) addresses the problem of identifying an unknown system from an externally observed sample path. Online modeling refinement studies the continuing identification process when the observed sample path is updated incrementally. The notion of information embedded in a sample path is defined. By taking advantage of the structural similarity between successive observed sample paths, the computational requirement of the proposed online modeling refinement algorithm is kept minimal. An example shows how the identification results converge as the incrementally observed sequence accumulates over time.
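The paper's algorithm is not reproduced in the abstract, so the following sketch only conveys the flavor of incremental identification: a prefix-tree model of the observed event sequence is extended by processing just the newly observed suffix of the sample path.

```python
# Minimal sketch of online refinement: grow a prefix-tree model of an event sequence,
# reprocessing only the new suffix on each update. Not the paper's specific algorithm.

class PrefixTree:
    def __init__(self):
        self.transitions = {}          # state -> {event: next_state}
        self.n_states = 1              # state 0 is the initial state

    def observe(self, path, start=0):
        """Extend the model with new events; `start` skips the already-processed prefix,
        which is what keeps the incremental update cheap."""
        state = self._replay(path[:start])
        for ev in path[start:]:
            nxt = self.transitions.setdefault(state, {})
            if ev not in nxt:
                nxt[ev] = self.n_states
                self.n_states += 1
            state = nxt[ev]

    def _replay(self, prefix):
        state = 0
        for ev in prefix:
            state = self.transitions[state][ev]
        return state

model = PrefixTree()
sample_path = ["a", "b", "a"]
model.observe(sample_path)                        # initial identification
sample_path += ["c", "b"]
model.observe(sample_path, start=3)               # online refinement: process only the new suffix
print("states:", model.n_states, "transitions:", model.transitions)
```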

19.
We identify a unique viewpoint on the collective behaviour of intelligent agents. We first develop a highly general abstract model for the possible future lives these agents may encounter as a result of their decisions. In the context of these possibilities, we show that the causal entropic principle, whereby agents follow behavioural rules that maximize their entropy over all paths through the future, predicts many of the observed features of social interactions among both human and animal groups. Our results indicate that agents are often able to maximize their future path entropy by remaining cohesive as a group and that this cohesion leads to collectively intelligent outcomes that depend strongly on the distribution of the number of possible future paths. We derive social interaction rules that are consistent with maximum entropy group behaviour for both discrete and continuous decision spaces. Our analysis further predicts that social interactions are likely to be fundamentally based on Weber's law of response to proportional stimuli, supporting many studies that find a neurological basis for this stimulus–response mechanism and providing a novel basis for the common assumption of linearly additive 'social forces' in simulation studies of collective behaviour.
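As a hedged toy version of the path-entropy idea, the sketch below has an agent in a bounded one-dimensional corridor score each move by the log-count of feasible future paths (the path entropy under a uniform path distribution) and choose the maximizing move; the corridor and horizon are invented illustrations of the paper's far more general model.

```python
import math

LO, HI, HORIZON = 0, 10, 6        # corridor bounds and look-ahead depth (toy assumptions)

def count_paths(pos, depth):
    """Number of feasible left/right paths of length `depth` that stay inside the corridor."""
    if depth == 0:
        return 1
    return sum(count_paths(p, depth - 1) for p in (pos - 1, pos + 1) if LO <= p <= HI)

for pos in (1, 8):
    # Path entropy of each move: log of the number of futures it leaves open.
    scores = {move: round(math.log(count_paths(pos + move, HORIZON - 1)), 3)
              for move in (-1, +1) if LO <= pos + move <= HI}
    best = max(scores, key=scores.get)
    print(f"at {pos}: path entropy by move {scores} -> choose move {best:+d}")
```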

20.

Consider a case where cause-effect relationships between variables can be described by a causal diagram and the corresponding Gaussian linear structural equation model. In order to identify total effects in studies with an unobserved response variable, this paper proposes graphical criteria for selecting both covariates and variables caused by the response variable. The results enable us not only to judge from the graph structure whether a total effect can be expressed through the observed covariances, but also to provide its closed-form expression in cases where the answer is affirmative. The graphical criteria of this paper are helpful for inferring total effects when it is difficult to observe a response variable.

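As a minimal worked example of total effects in a Gaussian linear SEM (with an observed rather than unobserved response, unlike the paper's setting), the sketch below simulates a confounded model and recovers the total effect of X on Y by back-door adjustment; all coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Linear Gaussian SEM with a confounder C: C -> X, C -> Y, plus X -> M -> Y and X -> Y.
C = rng.normal(size=n)
X = 0.8 * C + rng.normal(size=n)
M = 1.2 * X + rng.normal(size=n)
Y = 0.5 * X + 0.7 * M + 0.6 * C + rng.normal(size=n)

# In a linear SEM the total effect of X on Y is the sum over directed paths of the
# products of edge coefficients: 0.5 (direct) + 1.2 * 0.7 (through M) = 1.34.
design = np.column_stack([X, C, np.ones(n)])     # adjusting for C satisfies the back-door criterion
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(f"estimated total effect: {beta[0]:.3f} (theory 1.34)")
```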
