1.
The electrochemical behavior of mangiferin (MGN), a natural antioxidant compound, is examined using cyclic and differential pulse voltammetry in a protic medium on a glassy carbon electrode. The voltammograms exhibit a single irreversible, pH-dependent anodic wave with current controlled by adsorption. Complexes of MGN with β-cyclodextrin (β-CD) were prepared, and their formation was confirmed by UV-vis spectroscopy and by electrochemical experiments using a self-assembled monolayer of cyclodextrin on a gold electrode. The association constant of the MGN:β-CD complexes was estimated by the Benesi-Hildebrand method, based on the spectrophotometric quantification of free β-CD, and by a direct method using cyclic voltammetry and the Langmuir isotherm. PM-IRRAS experiments corroborated the inclusion process through the observation of the corresponding peaks in the spectra of the samples. MGN was quantified using a simple electrochemical method based on a β-CD-incorporated carbon nanotube (CNT)-modified electrode (β-CDCNT). The presence of β-CD led to a 10-fold lower detection limit than that obtained with a CNT-modified electrode.
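For reference, a common form of the Benesi-Hildebrand relation for a 1:1 host-guest complex, sketched here under the assumption that the absorbance change of MGN is followed at a fixed total concentration [MGN]_0 with β-CD in excess (the paper's variant, based on quantifying free β-CD, may differ in detail):

    \frac{1}{\Delta A} = \frac{1}{\Delta\varepsilon\,[\mathrm{MGN}]_0}
                       + \frac{1}{K\,\Delta\varepsilon\,[\mathrm{MGN}]_0\,[\beta\text{-CD}]}

A plot of 1/ΔA against 1/[β-CD] is then linear, and the association constant K follows from the intercept-to-slope ratio.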
2.
Software product line engineering is about producing a set of similar products in a certain domain. A variability model documents the variability amongst products in a product line. The specification of variability can be extended with quality information, such as measurable quality attributes (e.g., CPU and memory consumption) and constraints on these attributes (e.g., memory consumption should be in a range of values). However, the wrong use of constraints may cause anomalies in the specification which must be detected (e.g., the model could represent no products). Furthermore, based on such quality information, it is possible to carry out quality-aware analyses, i.e., the product line engineer may want to verify whether it is possible to build a product that satisfies a desired quality. The challenge for quality-aware specification and analysis is threefold. First, there should be a way to specify quality information in variability models. Second, it should be possible to detect anomalies in the variability specification associated with quality information. Third, there should be mechanisms to verify the variability model to extract useful information, such as the possibility of building a product that fulfils certain quality conditions (e.g., is there any product that requires less than 512 MB of memory?). In this article, we present an approach for quality-aware analysis in software product lines using the orthogonal variability model (OVM) to represent variability. We propose to map variability represented in the OVM, together with its associated quality information, to a constraint satisfaction problem and to use an off-the-shelf constraint programming solver to automatically perform the verification task. To illustrate our approach, we use a product line in the automotive domain, an example created in a national project by a leading car company. We have developed a prototype tool named FaMa-OVM, which serves as a proof of concept. We were able to identify void models and dead and false optional elements, and to check whether the product line example satisfies quality conditions.
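As an illustration of this kind of mapping, the sketch below encodes a toy variability model with one quality attribute as a constraint satisfaction problem and hands it to the off-the-shelf python-constraint solver. The feature names, the "requires" link, and the memory figures are hypothetical, not taken from the automotive case study or from FaMa-OVM.

    # Minimal CSP sketch: boolean selection variables for features,
    # one variability constraint, one quality constraint (memory < 512 MB).
    from constraint import Problem

    MEM = {"navigation": 300, "voice_control": 150, "parking_assist": 200}  # MB, hypothetical

    problem = Problem()
    for feature in MEM:
        problem.addVariable(feature, [0, 1])  # 0 = excluded, 1 = included

    # Variability constraint: voice_control requires navigation (hypothetical OVM link).
    problem.addConstraint(lambda v, n: v <= n, ("voice_control", "navigation"))

    # Quality constraint: total memory consumption of selected features stays below 512 MB.
    problem.addConstraint(
        lambda *sel: sum(s * m for s, m in zip(sel, MEM.values())) < 512,
        tuple(MEM),
    )

    # Each solution is a product satisfying the quality condition;
    # an empty result would indicate a void model under this condition.
    print(problem.getSolutions())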
3.
Companies are taking advantage of cloud computing to upgrade their business processes. Cloud computing requires interaction with many kinds of applications, so it is necessary to improve the performance of the software tools that keep information across all these applications consistent and synchronised. Integration platforms are specialised software tools that provide support to design, implement, run, and monitor integration solutions, which orchestrate a set of applications so as to promote compatibility among their data or to develop new functionality on top of the current ones. The run-time system is the part of the integration platform responsible for running the integration solutions, which makes its performance an issue of the utmost importance. The contribution of this article is two-fold: a framework and an evaluation of integration platforms. The former is a framework of ten properties, grouped into two dimensions, for evaluating run-time systems with a focus on performance. Using this framework as a reference, the second contribution is an evaluation of nine open-source integration platforms, which represent the state of the art, support the integration patterns, and follow the pipes-and-filters architectural style. In addition, as a result of this work, we suggest open research directions that can be explored to improve the performance of run-time systems and that may, at the same time, help adapt them to the context of cloud computing.
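The pipes-and-filters style that the evaluated platforms follow can be summarised with a minimal sketch (Python is used here purely for illustration): a message flows through an ordered chain of filters, each implementing one integration pattern. The filter names below are illustrative assumptions, not taken from the article.

    # Pipes-and-filters sketch: each filter transforms a message and
    # passes it down the pipe to the next one.
    from typing import Callable, Iterable

    Filter = Callable[[dict], dict]

    def pipeline(filters: Iterable[Filter], message: dict) -> dict:
        """Run a message through an ordered chain of filters (the 'pipe')."""
        for f in filters:
            message = f(message)
        return message

    def translator(msg: dict) -> dict:   # Message Translator pattern (illustrative)
        return {**msg, "payload": msg["payload"].upper()}

    def enricher(msg: dict) -> dict:     # Content Enricher pattern (illustrative)
        return {**msg, "source": "crm"}

    print(pipeline([translator, enricher], {"payload": "order created"}))

The run-time system's job is to execute many such chains concurrently, which is why its throughput and scheduling behaviour dominate the performance of the whole platform.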
4.
Integration frameworks are specialized software tools built and adapted to facilitate the design and implementation of integration solutions. An integration solution allows for the reuse of applications from the software ecosystem of companies to support their business processes. There are several open-source integration frameworks available on the market designed to operate in a business context and to manipulate structured data; however, they are increasingly required to deal with unstructured data and large volumes of data, which demands effort to adapt them. Choosing the framework that is easiest to adapt is not a trivial task. In this article, we review the newest stable versions of four open-source integration frameworks by analyzing how they have evolved regarding their adaptive maintainability over five years. We rank them according to their maintainability degree and compare past and current versions of each framework. To enable researchers and developers to replicate our experiments, verify our findings, and experiment with new versions of the analyzed frameworks, we detail the experimental protocol used and have made all the required software available on the Web.
5.
Growing demand for reduced local hardware infrastructure is driving the adoption of cloud computing. In the Infrastructure-as-a-Service model, service providers offer virtualized computational resources in the form of virtual machine instances. The large variety of providers and instances makes decision-making a difficult task for users, especially as factors such as the datacenter location, where the virtual machine is hosted, have a direct influence on the price of instances. The same instance may show price differences when hosted in different geographically distributed datacenters, so the datacenter location needs to be taken into account during the decision-making process. Given this problem, we propose D-AHP, a methodology to aid decision-making based on Pareto dominance and the Analytic Hierarchy Process (AHP). In D-AHP, the dominance concept is applied to reduce the number of instances to be compared; instances are selected based on a set of objectives, while AHP ranks the selected ones using a set of criteria and sub-criteria, among them the datacenter location. The results from case studies show that the choice of which instance is more suitable for the user may change when the datacenter location is considered as a selection criterion, which highlights the need to consider this factor when migrating applications to the cloud. In addition, Pareto dominance applied early over the full set of instances proved to be efficient, since it significantly reduces the number of instances to be compared and ordered by AHP by excluding instances with fewer computational resources and higher cost from the decision-making process, mainly for larger application workloads.
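A minimal sketch of the dominance-filtering step of D-AHP, under the assumption that instances are compared on vCPUs, memory, and price (the instance data below is hypothetical, not from the paper's case studies): non-dominated instances form the reduced candidate set that AHP then ranks.

    # Pareto-dominance filter over a hypothetical instance catalog.
    from dataclasses import dataclass

    @dataclass
    class Instance:
        name: str
        vcpus: int      # more is better
        memory_gb: int  # more is better
        price: float    # USD/hour, less is better

    def dominates(a: Instance, b: Instance) -> bool:
        """a dominates b if it is no worse on every objective and strictly better on one."""
        no_worse = a.vcpus >= b.vcpus and a.memory_gb >= b.memory_gb and a.price <= b.price
        better = a.vcpus > b.vcpus or a.memory_gb > b.memory_gb or a.price < b.price
        return no_worse and better

    def pareto_front(instances: list[Instance]) -> list[Instance]:
        """Keep only non-dominated instances; these are the candidates AHP will rank."""
        return [i for i in instances if not any(dominates(j, i) for j in instances)]

    catalog = [
        Instance("m.small", 2, 4, 0.10),
        Instance("m.medium", 4, 8, 0.20),
        Instance("m.overpriced", 2, 4, 0.30),  # dominated by m.small
    ]
    print([i.name for i in pareto_front(catalog)])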
6.
Enterprises turn to their software applications to support their business processes. Over time, it is common for a company to end up with a wide range of applications, usually developed in-house by its information technology department or purchased from third-party specialized software companies. The result is a heterogeneous software ecosystem with applications developed in different technologies and frequently using different data models, which brings challenges when two or more applications have to collaborate to support a business process. Integration platforms are specialized software tools that help design, implement, run, and monitor integration solutions that orchestrate a set of applications. The run-time system is the component of integration platforms responsible for running integration solutions, which makes its performance a critically important issue. In this paper, we report our experience in evaluating and comparing four well-known open-source integration platforms in the context of a research project where performance was a central requirement for choosing an integration platform. The evaluation was conducted using a decision-making methodology that builds a ranking of candidate platforms by means of subjective and objective criteria. The subjective evaluation takes expert preferences into account and compares integration platforms using the analytic hierarchy process, which has been used in many applications related to decision-making. The objective evaluation is built on top of properties distributed across three dimensions, namely message processing, hotspot detection, and fairness execution, which compose the research methodology we used. The evaluated platforms were ranked to identify the one with the best performance.
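A minimal sketch of the AHP step used in the subjective evaluation: priority weights are derived from a pairwise-comparison matrix, here via the common geometric-mean approximation of the principal eigenvector. The comparison values are illustrative, not the experts' judgments from the paper; only the three dimension names come from the abstract.

    # AHP priority weights from a pairwise-comparison matrix (Saaty's 1-9 scale).
    import numpy as np

    criteria = ["message processing", "hotspot detection", "fairness execution"]

    # A[i][j] = how much more important criterion i is than criterion j (illustrative values).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    # Geometric mean of each row, normalised to sum to 1, approximates the
    # principal eigenvector, i.e. the priority weight of each criterion.
    geo_means = A.prod(axis=1) ** (1.0 / A.shape[0])
    weights = geo_means / geo_means.sum()

    for name, w in zip(criteria, weights):
        print(f"{name}: {w:.3f}")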