Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Accurate and reliable modelling of protein–protein interaction networks for complex diseases such as colorectal cancer can help to better understand disease mechanisms and potentially discover new drugs. Machine learning methods such as empirical mode decomposition combined with least-squares support vector machines, and the discrete Fourier transform, have been widely utilised both as classifiers and for the automatic discovery of biomarkers for disease diagnosis. The existing methods are, however, less efficient as they tend to ignore interactions with the classifier. In this study, the authors propose a two-stage optimisation approach to effectively select biomarkers and discover interactions among them. In the first stage, particle swarm optimisation (PSO) and differential evolution (DE) are used to optimise the parameters of the support vector machine recursive feature elimination (SVM-RFE) algorithm, and a dynamic Bayesian network is then used to predict temporal relationships between biomarkers across two time points. Results show that the 18 and 25 biomarkers selected by the PSO- and DE-based approaches, respectively, yield the same accuracy of 97.3% and F1-scores of 97.7% and 97.6%, respectively. The stratified analysis reveals that Alpha-2-HS-glycoprotein was a dominant hub gene with multiple interactions with other genes, including Fibrinogen alpha chain, which is also a potential biomarker for colorectal cancer.
Inspec keywords: cancer, proteins, particle swarm optimisation, evolutionary computation, support vector machines, recursive functions, Bayes methods, genetics, molecular biophysics, medical computing
Other keywords: colorectal cancer metastasis, two-stage optimisation approach, protein–protein interaction networks, biomarkers, particle swarm optimisation, differential evolution, support vector machine recursive feature elimination, dynamic Bayesian network, stratified analysis, Alpha-2-HS-glycoprotein, hub gene, Fibrinogen alpha chain
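The first optimisation stage above can be sketched with a minimal global-best PSO loop. In the paper the objective would be the cross-validated error of SVM-RFE as a function of its hyperparameters; here a toy quadratic error surface with a known minimum stands in for that (the function name `error_surface` and the parameter bounds are illustrative assumptions, not from the paper):

```python
import random

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over box `bounds` with global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy stand-in for the cross-validated SVM-RFE error over (C, gamma)
def error_surface(p):
    C, gamma = p
    return (C - 1.0) ** 2 + (gamma - 0.1) ** 2

best, val = pso(error_surface, [(0.0, 10.0), (0.0, 1.0)])
```

In practice the real objective evaluation dominates the cost, so the swarm size and iteration budget are the main tuning knobs.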

2.
Here, a two-phase search strategy is proposed to identify biomarkers in gene expression data for prostate cancer diagnosis. A statistical filtering method is first employed to remove the noisiest data. In the first phase of the search strategy, a multi-objective optimisation based on a binary particle swarm optimisation algorithm tuned by a chaotic method is proposed to select the optimal subset of genes, with the minimum number of genes and the maximum classification accuracy. In the second phase, a cache-based modification of the sequential forward floating selection algorithm is used to find the most discriminant genes within the optimal subset selected in the first phase. The results of applying the proposed algorithm to the challenging publicly available prostate cancer data set demonstrate that it can identify the informative genes perfectly, achieving 100% classification accuracy, sensitivity, and specificity with only nine biomarkers.
Inspec keywords: cancer, biological organs, optimisation, feature extraction, search problems, particle swarm optimisation, pattern classification, genetics
Other keywords: biomarkers, gene expression feature selection, prostate cancer diagnosis, heuristic–deterministic search strategy, two-phase search strategy, gene expression data, statistical filtering method, noisiest data, multiobjective optimisation, particle swarm optimisation algorithm, chaotic method, selection algorithm, discriminant genes, available challenging prostate cancer data, informative genes
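The binary PSO with chaotic tuning described in phase one can be sketched as follows. This is an assumed, simplified reading: bits encode gene inclusion, a logistic chaotic map perturbs the inertia weight, and a toy cost (mismatch against a known "informative" mask plus a small sparsity penalty) stands in for the real classifier-based fitness:

```python
import math, random

def chaotic_bpso(cost, n_bits, n_particles=20, n_iter=150, seed=3):
    """Binary PSO whose inertia weight is driven by a logistic chaotic map."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [cost(p) for p in pos]
    g = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    z = 0.7                                   # chaotic state in (0, 1)
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)               # logistic map
        w = 0.4 + 0.5 * z                     # chaotic inertia weight
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid transfer: velocity -> probability of selecting the gene
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            v = cost(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# toy objective: a few genes (bits) are informative; extras are penalised
RELEVANT = [1, 0, 1, 0, 0, 1, 0, 0]
def cost(bits):
    mismatch = sum(b != r for b, r in zip(bits, RELEVANT))
    return mismatch + 0.01 * sum(bits)

best, val = chaotic_bpso(cost, 8)
```

The chaotic map keeps the inertia weight bouncing between exploration and exploitation instead of decaying monotonically, which is the usual motivation for chaos-tuned swarms.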

3.
4.
This study proposes an umbrella deployment of a swarm intelligence algorithm, stochastic diffusion search, for medical imaging applications. After summarising the results of previous works, which show how the algorithm assists in the identification of metastasis in bone scans and microcalcifications on mammographs, the use of the algorithm in assessing CT images of the aorta is demonstrated for the first time, along with its performance in detecting the nasogastric tube in chest X-rays. The swarm intelligence algorithm presented in this study is adapted to address these particular tasks, and its functionality is investigated by running the swarms on sample CT images and X-rays whose status has been determined by senior radiologists. In addition, a hybrid swarm intelligence-learning vector quantisation (LVQ) approach is proposed in the context of magnetic resonance (MR) brain image segmentation. Particle swarm optimisation is used to train the LVQ, which eliminates the iteration-dependent nature of LVQ. The proposed methodology is used to detect tumour regions in abnormal MR brain images.
Inspec keywords: swarm intelligence, image segmentation, brain, neurophysiology, medical image processing, biomedical MRI, computerised tomography, diagnostic radiography, bone, diseases, learning (artificial intelligence), particle swarm optimisation, iterative methods, tumours, medical disorders
Other keywords: medical imaging identifying metastasis, microcalcifications, umbrella deployment, stochastic diffusion, metastasis identification, bone scans, mammographs, CT imaging, aorta, nasogastric tube, chest X-ray, hybrid swarm intelligence-learning vector quantisation approach, magnetic resonance brain image segmentation, particle swarm optimisation, iteration-dependent nature, tumour regions, abnormal MR brain imaging
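Stochastic diffusion search is easiest to see on its classic string-search toy problem rather than on images: agents hold hypotheses about where a pattern starts, partially test them, and recruit one another. The sketch below is that standard textbook formulation, not the authors' imaging-specific adaptation:

```python
import random

def sds(text, pattern, n_agents=100, n_iter=100, seed=7):
    """Minimal stochastic diffusion search: agents hypothesise a start index
    for `pattern` in `text`, partially test it, and recruit each other."""
    rng = random.Random(seed)
    last = len(text) - len(pattern)
    hyp = [rng.randint(0, last) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iter):
        # test phase: each agent checks ONE randomly chosen character
        for i in range(n_agents):
            off = rng.randrange(len(pattern))
            active[i] = text[hyp[i] + off] == pattern[off]
        # diffusion phase: inactive agents copy an active agent or restart
        for i in range(n_agents):
            if not active[i]:
                j = rng.randrange(n_agents)
                hyp[i] = hyp[j] if active[j] else rng.randint(0, last)
    # answer = the hypothesis shared by the largest cluster of agents
    return max(set(hyp), key=hyp.count)

TEXT = "the quick brown fox jumps over the lazy dog"
found = sds(TEXT, "lazy")
```

The partial (one-character) test is what makes SDS cheap per iteration; the population converges on the best full match because only agents at the true location stay active indefinitely.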

5.
In this study, a closed-loop control scheme is proposed for the glucose–insulin regulatory system in type-1 diabetes mellitus (T1DM) patients. Some innovative hybrid glucose–insulin regulators have combined artificial intelligence techniques, such as fuzzy logic and genetic algorithms, with the well-known Palumbo model to regulate the blood glucose (BG) level in T1DM patients. However, most of these approaches have focused on glucose reference tracking, while the quality of this tracking, such as chattering reduction in the insulin injection, has not been well studied. Higher-order sliding mode (HoSM) controllers have been employed to attenuate the effect of chattering. Owing to the delayed and non-linear nature of the glucose–insulin mechanism, as well as various unmeasurable disturbances, even the HoSM methods are only partly successful. In this study, data fusion of adaptive neuro-fuzzy inference systems optimised by particle swarm optimisation is presented. The excellent performance of the proposed hybrid controller, i.e. desired BG-level tracking and chattering reduction in the presence of daily glucose-level disturbances, is verified.
Inspec keywords: fuzzy control, variable structure systems, particle swarm optimisation, neurocontrollers, fuzzy neural nets, blood, genetic algorithms, closed loop systems, medical control systems, fuzzy reasoning, diseases, nonlinear control systems, sugar
Other keywords: data fusion, adaptive neuro-fuzzy inference systems, particle swarm optimisation, hybrid controller, desired BG-level tracking, chattering reduction, daily glucose-level disturbances, closed-loop control scheme, glucose–insulin regulatory system, type-1 diabetes mellitus patients, innovative hybrid glucose–insulin regulators, artificial intelligence, fuzzy logic, genetic algorithm, Palumbo model, blood glucose level, T1DM patients, glucose reference tracking, insulin injection, mode controllers, glucose–insulin mechanism, chattering-free hybrid adaptive neuro-fuzzy inference system, particle swarm optimisation data fusion-based BG-level control

6.
7.
Bone loss in osteoporosis, commonly observed in postmenopausal women and the elderly, is caused by an imbalance in activities of bone-forming osteoblasts and bone-resorbing osteoclasts. To treat osteoporosis and increase bone mineral density (BMD), physical activities and drugs are often recommended. Complex systems dynamics prevent an intuitive prediction of treatment strategies, and little is known about an optimal sequence for the combinatorial use of available treatments. In this study, the authors built a mathematical model of bone remodelling and developed a treatment strategy for mechanical loading and salubrinal, a synthetic chemical agent that enhances bone formation and prevents bone resorption. The model formulated a temporal BMD change of a mouse's whole skeleton in response to ovariectomy, mechanical loading and administration of salubrinal. Particle swarm optimisation was employed to maximise a performance index (a function of BMD and treatment cost) to find an ideal sequence of treatment. The best treatment was found to start with mechanical loading followed by salubrinal. As treatment costs increased, the sequence started with no treatment and usage of salubrinal became scarce. The treatment strategy will depend on individual needs and costs, and the proposed model is expected to contribute to the development of personalised treatment strategies.
Inspec keywords: bone, diseases, minerals, particle swarm optimisation, patient treatment, physiological models
Other keywords: ovariectomy, mouse whole skeleton, bone mineral density, bone resorption, bone formation, synthetic chemical agent, salubrinal, mechanical loading, bone remodelling mathematical model, particle swarm optimisation, osteoporosis treatment

8.
Identifying drug–target interactions has been a key step for drug repositioning, drug discovery and drug design. Since it is expensive to determine the interactions experimentally, computational methods are needed for predicting them. In this work, the authors first propose a single-view penalised graph (SPGraph) clustering approach to integrate drug structure and protein sequence data in a structural view. The SPGraph model clusters drugs and targets simultaneously such that the known drug–target interactions are best preserved in the clustering results. They then apply the SPGraph to a chemical view with drug response data and gene expression data in the NCI-60 cell lines. They further generalise the SPGraph to a multi-view penalised graph (MPGraph) version, which can integrate the structural and chemical views of the data. In the authors' experiments, they compare their approach with several comparison partners; the results show that the SPGraph improves the prediction accuracy on a small scale, while the MPGraph achieves around 10% improvement in prediction accuracy. They finally give some new targets for 22 Food and Drug Administration approved drugs for drug repositioning, some of which are supported by other references.
Inspec keywords: graphs, drug delivery systems, drugs, proteins, molecular biophysics, molecular configurations, optimisation, eigenvalues and eigenfunctions, Laplace equations, cancer, cellular biophysics, gene therapy, medical computing
Other keywords: MPGraph, multiview penalised graph clustering, drug-target interactions, drug repositioning, drug discovery, drug design, computational methods, single-view penalized graph clustering approach, drug structure, protein sequence data, SPGraph model, optimisation problem, spectral clustering, eigenvalue decomposition, Laplacian model, gene expression data, NCI-60 cell lines
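The keywords point to the computational core of SPGraph: spectral clustering via eigenvalue decomposition of a graph Laplacian. A dependency-free sketch of that core step, assuming nothing about the paper's penalty terms, is to partition a small graph by the sign pattern of the Fiedler vector (the Laplacian's second-smallest eigenvector), computed here by power iteration on a shifted Laplacian:

```python
import math

def fiedler_partition(edges, n, iters=2000):
    """Split an undirected graph in two by the sign of the Fiedler vector,
    computed by power iteration on B = sigma*I - L with the constant
    eigenvector deflated out."""
    deg = [0] * n
    adj = [[0] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = 1
        deg[u] += 1
        deg[v] += 1
    sigma = 2 * max(deg)          # upper bound on the largest eigenvalue of L
    v = [math.sin(i + 1.0) for i in range(n)]   # arbitrary start vector
    for _ in range(iters):
        # w = B v = sigma*v - (D - A) v
        w = [sigma * v[i] - deg[i] * v[i]
             + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        mean = sum(w) / n                       # project out the constant vector
        w = [x - mean for x in w]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return [set(i for i in range(n) if v[i] >= 0),
            set(i for i in range(n) if v[i] < 0)]

# two triangles joined by a single bridge edge: the natural 2-clustering
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = fiedler_partition(edges, 6)
```

In SPGraph-style models, the drugs and targets would be nodes of one joint graph whose edge weights encode similarity plus the interaction-preservation penalty; the eigen-step itself is unchanged.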

9.
Prediction of cardiovascular disease (CVD) is a critical challenge in the area of clinical data analysis. In this study, an efficient heart disease prediction model is developed based on optimal feature selection. Initially, data pre-processing is performed using data cleaning, data transformation, missing value imputation, and data normalisation. The decision function-based chaotic salp swarm (DFCSS) algorithm is then used to select the optimal features in the feature selection process, and the chosen attributes are given to an improved Elman neural network (IENN) for data classification. Here, the sailfish optimisation (SFO) algorithm is used to compute the optimal weight values of the IENN. The combined DFCSS–IENN-based SFO (IESFO) algorithm effectively predicts heart disease. The proposed (DFCSS–IESFO) approach is implemented in the Python environment using two datasets: the University of California Irvine (UCI) Cleveland heart disease dataset and a CVD dataset. The simulation results show that the proposed scheme achieved a high classification accuracy of 98.7% for the CVD dataset and 98% for the UCI dataset, compared to other classifiers such as support vector machine, K-nearest neighbour, Elman neural network, Gaussian Naive Bayes, logistic regression, random forest, and decision tree.
Inspec keywords: cardiovascular system, medical diagnostic computing, feature extraction, regression analysis, data mining, learning (artificial intelligence), Bayes methods, neural nets, support vector machines, diseases, pattern classification, data handling, decision trees, cardiology, data analysis, feature selection
Other keywords: efficient heart disease prediction, optimal feature selection, improved Elman-SFO, cardiovascular disease, clinical data analysis, data pre-processing, data cleaning, data transformation, missing value imputation, data normalisation, decision function-based chaotic salp swarm algorithm, optimal features, feature selection process, improved Elman neural network, data classification, sailfish optimisation algorithm, optimal weight value, DFCSS–IENN-based SFO algorithm, DFCSS–IESFO, University of California Irvine Cleveland heart disease dataset, CVD dataset, high classification accuracy

10.
Computational methods play an important role in disease gene prioritisation by integrating many kinds of data sources, such as gene expression, functional annotations and protein–protein interactions. However, existing methods usually perform well in predicting highly linked genes, whereas they work quite poorly for loosely linked genes. Motivated by this observation, a degree-adjusted strategy is applied to improve an algorithm proposed earlier for the prediction of disease genes from gene expression and protein interactions. The authors also show that the modified method is good at identifying loosely linked disease genes, and the overall performance is enhanced accordingly. This study suggests the importance of statistically adjusting for the degree distribution bias of the background network in network-based modelling of complex diseases.
Inspec keywords: biochemistry, bioinformatics, diseases, genetics, genomics, medical computing, physiological models, proteins, statistical analysis, proteomics
Other keywords: degree-adjusted algorithm, candidate disease genes prioritisation, gene expression, protein interactome, computational method, functional annotation, protein–protein interaction, highly linked genes prediction, disease genes prediction, loosely linked disease genes identification, degree distribution bias statistical adjustment, complex disease network-based modelling
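The hub bias the abstract describes, and the effect of a degree adjustment, can be shown on a toy interactome. This is an illustrative scheme (scoring candidates by seed-neighbour count, down-weighted by degree raised to an exponent), not the authors' exact algorithm; all gene names are made up:

```python
def prioritise(adj, seeds, alpha=0.0):
    """Score candidate genes by seed-neighbour count, down-weighted by
    degree**alpha; alpha=0 gives the raw score, alpha=1 a degree-adjusted one."""
    scores = {}
    for gene, nbrs in adj.items():
        if gene in seeds:
            continue
        raw = sum(n in seeds for n in nbrs)
        scores[gene] = raw / (len(nbrs) ** alpha if nbrs else 1.0)
    return scores

# toy interactome: 'hub' touches many genes, 'loose' only its two seed partners
adj = {
    'hub':   ['s1', 's2', 's3'] + ['g%d' % i for i in range(9)],
    'loose': ['s1', 's2'],
}
for g in set(x for nbrs in adj.values() for x in nbrs):
    adj.setdefault(g, [])          # leaf nodes with no recorded partners

raw = prioritise(adj, {'s1', 's2', 's3'}, alpha=0.0)
adjusted = prioritise(adj, {'s1', 's2', 's3'}, alpha=1.0)
```

With the raw score the promiscuous hub outranks the loosely linked candidate simply because it has more chances to touch a seed; dividing by degree reverses the ranking, which is the point the study makes.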

11.
Microarray technology plays a significant role in cancer classification, where a large number of genes and samples are simultaneously analysed. Efficient analysis of microarray data creates a great demand for the development of intelligent techniques. In this article, the authors propose a novel hybrid technique employing the Fisher criterion, ReliefF, and an extreme learning machine (ELM) based on the chaotic emperor penguin optimisation algorithm (CEPO), a chaotic variant of the recently developed EPO metaheuristic. In the proposed method, the Fisher score and ReliefF are first used independently as filters for relevant gene selection. A novel population-based metaheuristic, CEPO, is then proposed to pre-train the ELM by selecting the optimal input weights and hidden biases. The authors have conducted experiments on seven well-known data sets. To evaluate its effectiveness, the proposed method is compared with the original EPO, genetic algorithm, and particle swarm optimisation-based ELM, along with other state-of-the-art techniques. The experimental results show that the proposed framework achieves better accuracy than the state-of-the-art schemes. The efficacy of the proposed method is demonstrated in terms of accuracy, sensitivity, specificity, and F-measure.
Inspec keywords: genetic algorithms, pattern classification, biology computing, cancer, learning (artificial intelligence), search problems, particle swarm optimisation
Other keywords: optimal input weights, data sets, original EPO, genetic algorithm, particle swarm optimisation-based ELM, microarray cancer classification, microarray technology, microarray data, intelligent techniques, Fisher criterion, ReliefF, chaotic emperor penguin optimisation algorithm, CEPO, recently developed metaheuristic method, Fisher score, relevant gene selection, population-based, chaotic penguin optimised extreme learning machine, F-measure
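The ELM at the heart of this pipeline is simple enough to sketch in full: hidden weights and biases are random (in the paper they would be chosen by CEPO rather than sampled), and only the output weights are fitted, by ridge-regularised least squares. The XOR check, hidden-layer size, and seed below are illustrative assumptions:

```python
import math, random

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(X, y, n_hidden=10, ridge=1e-6, seed=1):
    """Extreme learning machine: random tanh hidden layer, output weights
    fitted by solving the ridge-regularised normal equations."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-2, 2) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    feats = lambda x: [math.tanh(sum(W[h][j] * x[j] for j in range(d)) + b[h])
                       for h in range(n_hidden)]
    H = [feats(x) for x in X]
    # (H^T H + ridge*I) beta = H^T y
    A = [[sum(Hk[i] * Hk[j] for Hk in H) + (ridge if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    rhs = [sum(Hk[i] * yk for Hk, yk in zip(H, y)) for i in range(n_hidden)]
    beta = gauss_solve(A, rhs)
    return lambda x: sum(bh * fh for bh, fh in zip(beta, feats(x)))

# tiny sanity check: learn XOR labels in {-1, +1}
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1, 1, 1, -1]
model = elm_fit(X, y)
```

Because training reduces to one linear solve, a metaheuristic such as CEPO can afford to evaluate many candidate hidden layers, which is what makes "pre-training the ELM" by swarm search practical.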

12.
In this study, the ant colony optimisation (ACO) algorithm is used to derive near-optimal interactions between a number of single nucleotide polymorphisms (SNPs). This approach is used to discover small numbers of SNPs that are combined into a decision tree or contingency table model. The ACO algorithm is shown to be very robust, as it is able to find results that are discriminatory from a statistical perspective with logical-interaction, decision tree and contingency table models for various numbers of SNPs in the interaction. A large number of the SNPs discovered here have already been identified in large genome-wide association studies as related to type II diabetes, lending additional confidence to the results.
Inspec keywords: genetics, genomics, DNA, polymorphism, molecular biophysics, molecular configurations, ant colony optimisation, decision trees, bioinformatics, diseases
Other keywords: ant colony optimisation, decision tree, contingency table models, gene-gene interactions, ACO algorithm, near-optimal interactions, single nucleotide polymorphisms, SNP, genome-wide association studies, type II diabetes

13.
The authors demonstrate an optimal stochastic control algorithm for obtaining desirable cancer treatment based on the Gompertz model. Two external forces, modelled as time-dependent functions, are introduced to manipulate the growth and death rates in the drift term of the Gompertz model. These input signals represent the effect of external treatment agents that decrease the tumour growth rate and increase the tumour death rate, respectively. The entropy and variance of the cancerous cell population are simultaneously controlled based on the Gompertz model. The authors introduce a constrained optimisation problem whose cost function is the variance of the cancerous cell population; the entropy, defined from the probability density function of the affected cells, is used as a constraint on the cost function. Analysing the growth and death rates of cancerous cells, it is found that a logarithmic control signal reduces the growth rate, while a hyperbolic tangent-like control function increases the death rate. The two optimal control signals are calculated by converting the constrained optimisation problem into an unconstrained one and applying a real-coded genetic algorithm. Mathematical justifications are provided to establish the existence and uniqueness of the solution of the optimal control problem.
Inspec keywords: optimal control, genetic algorithms, cancer, Fokker-Planck equation, cellular biophysics, stochastic systems, probability, tumours, entropy, medical control systems
Other keywords: cancer treatment, Gompertz model, time-dependent functions, process input signals, external treatment agents, tumour growth rate, constrained optimisation problem, cost function, cancerous cells population, probability density function, logarithmic control signal, Fokker-Planck equation, tumour growth process, optimal control signals, optimal control problem, optimal minimum variance-entropy control, optimal stochastic control algorithm, tumour death rates, hyperbolic tangent-like control function, unconstrained optimisation problem, real-coded genetic algorithm
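The controlled drift term described above can be illustrated with a deterministic sketch. This is only the drift of the model, with constant stand-in controls rather than the authors' time-dependent optimal signals, and no stochastic (Fokker–Planck) part; all parameter values are invented for illustration:

```python
import math

def simulate(u_growth=0.0, u_death=0.0, n0=100.0, a=0.3, k=1e6,
             dt=0.01, t_end=50.0):
    """Euler integration of a controlled Gompertz drift:
    dN/dt = (a - u_growth) * N * ln(k / N) - u_death * N,
    where u_growth lowers the growth rate and u_death raises the death rate."""
    n, t = n0, 0.0
    while t < t_end:
        dn = (a - u_growth) * n * math.log(k / n) - u_death * n
        n = max(n + dt * dn, 1e-9)   # keep the population positive
        t += dt
    return n

untreated = simulate()                          # grows toward the capacity k
treated = simulate(u_growth=0.15, u_death=0.45) # settles at a much lower level
```

With the death control active, the new equilibrium solves (a - u_growth) ln(k/N) = u_death, i.e. N = k * exp(-u_death / (a - u_growth)), so the treated tumour plateaus roughly at k*e^-3 here instead of near k.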

14.
In computational systems biology, the general aim is to derive regulatory models from multivariate readouts, thereby generating predictions for novel experiments. In the past, many such models have been formulated for different biological applications. The authors consider the scenario where a given model fails to predict a set of observations with acceptable accuracy and ask whether this is because the model lacks important external regulations. Real-world examples of such entities range from microRNAs to metabolic fluxes. To improve the prediction, they propose an algorithm to systematically extend the network by an additional latent dynamic variable which has an exogenous effect on the considered network. This variable's time course and influence on the other species are estimated in a two-step procedure involving spline approximation, maximum-likelihood estimation and model selection. Simulation studies show that such a hidden influence can successfully be inferred. The method is also applied to a signalling pathway model, where the authors analyse real data and obtain promising results. Furthermore, the technique can be employed to detect incomplete network structures.
Inspec keywords: biology computing, RNA, splines (mathematics), maximum likelihood estimation, approximation theory, biochemistry
Other keywords: latent dynamic components, biological systems, computational system biology, regulatory models, multivariate readouts, biological applications, external regulations, real-world examples, microRNA, metabolic fluxes, latent dynamic variables, variable time course, two-step procedure, spline approximation, maximum-likelihood estimation, model selection, signalling pathway model, real data, incomplete network structures

15.
Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of sufficiently high quality, some of the biggest challenges here are identifiability issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time-course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself as a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework, it may be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure of the fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo–EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise, is simple to implement and is usually very fast to compute. The optimisation can be realised in a classical or Bayesian fashion.
Inspec keywords: biochemistry, physiological models, stochastic processes, measurement errors, fluctuations, parameter estimation
Other keywords: model parameter identification, deterministic framework, biochemical system, steady state, transient state, stochastic modelling framework, objective function, immigration-death model, gene expression, Epo–EpoReceptor interaction, stochastic fluctuations, measurement noise

16.
The effect of a meal on blood glucose concentration is a key issue in diabetes mellitus because its estimation could be very useful in therapy decisions. In type 1 diabetes mellitus (T1DM), therapy based on automatic insulin delivery requires a closed-loop control system to maintain euglycaemia even in the postprandial state. Thus, mathematical modelling of glucose metabolism is relevant for predicting the metabolic state of a patient. Moreover, eating habits are characteristic of each person, so it is of interest that mathematical models of meal intake allow the glycaemic state of the patient to be personalised using historical therapy data, that is, daily measurements of glucose and records of carbohydrate intake and insulin supply. Here, a model of glucose metabolism that includes the effects of meals is analysed in order to establish criteria for data-based personalisation. The analysis includes the sensitivity and identifiability of the parameters, and the parameter estimation problem is solved via two algorithms: particle swarm optimisation and evonorm. The results show that the mathematical model can be a useful tool for estimating the glycaemic status of a patient and personalising it according to his/her historical data.
Inspec keywords: medical control systems, closed loop systems, particle swarm optimisation, parameter estimation, biochemistry, diseases, patient monitoring, patient diagnosis, blood, sugar, patient treatment, medical computing
Other keywords: meal intake, metabolic state, mathematical modelling, postprandial state, closed-loop control system, automatic insulin delivery, T1DM, type 1 diabetes mellitus, therapy decisions, blood glucose concentration, T1DM patients, meal glucose–insulin model, mathematical model, parameter estimation problem, data-based personalisation, glucose metabolism, insulin supply, carbohydrate intake, glucose records, therapy historical data, glycaemic state

17.
This study presents a multi-scale approach for simulating time-delay biochemical reaction systems when there are wide ranges of molecular numbers. The authors construct a new efficient approach based on partitioning into slow and fast subsets in conjunction with predictor–corrector methods. This multi-scale approach is shown to be much more efficient than existing methods such as the delay stochastic simulation algorithm and the modified next reaction method. Numerical testing on several important problems in systems biology confirms the accuracy and computational efficiency of this approach.
Inspec keywords: biochemistry, delays, biological techniques, predictor-corrector methods
Other keywords: multiscale approach, time-delay biochemical reaction systems, predictor–corrector methods, delay stochastic simulation algorithm, modified next reaction method, numerical testing, systems biology, method accuracy, computational efficiency
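The delay stochastic simulation algorithm used here as a baseline extends Gillespie's exact direct method. A minimal sketch of that non-delay baseline, on the immigration-death process (also used as a test model elsewhere in this list), shows the core loop the multi-scale method accelerates; the parameter values and burn-in are illustrative choices:

```python
import math, random

def ssa_immigration_death(k=10.0, gamma=0.1, t_end=2000.0, seed=11):
    """Gillespie direct method for 0 -> X (rate k) and X -> 0 (rate gamma*x).
    Returns the time-averaged copy number, which approaches k/gamma."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    acc, t_burn = 0.0, 500.0       # time-average only after a burn-in
    while t < t_end:
        a1, a2 = k, gamma * x      # propensities of the two reactions
        a0 = a1 + a2
        tau = -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        t_next = t + tau
        if t_next > t_burn:        # accumulate x * (time spent at this x)
            acc += x * (min(t_next, t_end) - max(t, t_burn))
        t = t_next
        x += 1 if rng.random() * a0 < a1 else -1   # pick which reaction fired
    return acc / (t_end - t_burn)

avg = ssa_immigration_death()      # stationary mean is k/gamma = 100
```

Every reaction event costs one loop iteration, which is exactly why fast reactions dominate the run time and motivate slow/fast partitioning.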

18.
Mathematical models are important tools to study the excluded volume effects on reaction–diffusion systems, which are known to play an important role inside living cells. Detailed microscopic simulations with off-lattice Brownian dynamics become computationally expensive in crowded environments. In this study, the authors therefore investigate to which extent on-lattice approximations, the so-called cellular automata models, can be used to simulate reactions and diffusion in the presence of crowding molecules. They show that the diffusion is most severely slowed down in the off-lattice model, since randomly distributed obstacles effectively exclude more volume than those ordered on an artificial grid. Crowded reaction rates can be both increased and decreased by the grid structure, and it proves important to model the molecules with realistic sizes when excluded volume is taken into account. The grid artefacts increase with increasing crowder density, and they conclude that the computationally more efficient on-lattice simulations are accurate approximations only for low crowder densities.
Inspec keywords: reaction-diffusion systems, cellular biophysics, biodiffusion, Brownian motion, cellular automata, molecular biophysics, molecular configurations
Other keywords: crowder density, grid artefacts, grid structure, crowded reaction rates, artificial grid, randomly distributed obstacles, crowding molecules, cellular automata models, on-lattice approximations, crowded environments, off-lattice Brownian dynamics, detailed microscopic simulations, living cells, mathematical models, off-lattice reaction-diffusion models, on-lattice reaction-diffusion models, excluded volume effects

19.
Network alignment is an important bridge to understanding human protein–protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem complicates and increases the time required to align protein interaction networks (PINs). Parallel computing technology is an effective solution to the challenge of aligning large-scale networks via sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose an HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode, and it improves the computing time ratio between the CPU and GPU compared with HGA when large-scale networks are considered. By using HGA-2N on GPUs, conserved PPIs can be observed, and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses that are obtained using other methods.
Inspec keywords: graphics processing units, proteins, molecular biophysics, genetics, microorganisms, medical computing, bioinformatics
Other keywords: graphics processing unit-based alignment, protein interaction networks, network alignment, human protein–protein interactions, Hungarian-Greedy algorithm, GPU acceleration, gene ontology terms, phylogenetic trees reconstruction, herpes viruses
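The greedy half of a Hungarian-Greedy scheme is easy to sketch: repeatedly take the highest-scoring unmatched node pair. This toy (which omits the Hungarian step and the 2-nearest-neighbour restriction of HGA-2N) also shows why greedy is only an approximation to the optimal assignment:

```python
def greedy_match(S):
    """Greedy one-to-one alignment on a similarity matrix S: repeatedly take
    the highest-scoring unmatched (row, column) pair. Fast, but only an
    approximation to the optimal (Hungarian) assignment."""
    n, m = len(S), len(S[0])
    pairs, rows, cols = [], set(range(n)), set(range(m))
    while rows and cols:
        i, j = max(((r, c) for r in rows for c in cols),
                   key=lambda rc: S[rc[0]][rc[1]])
        pairs.append((i, j))
        rows.discard(i)
        cols.discard(j)
    return pairs

# greedy happens to equal the optimum here ...
S1 = [[9, 1, 2], [1, 8, 3], [2, 3, 7]]
# ... but not here: greedy scores 10 + 1 = 11, the optimum is 9 + 9 = 18
S2 = [[10, 9], [9, 1]]
m1 = greedy_match(S1)
m2 = greedy_match(S2)
```

Each greedy pick is O(remaining pairs), and because picks are independent scans over a matrix, this is also the part that parallelises naturally on a GPU.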

20.
The present investigation aimed to prepare, optimise, and characterise lipid nanocapsules (LNCs) to improve the solubility and bioavailability of efavirenz (EFV). EFV-loaded LNCs were prepared by the phase-inversion temperature method, and the influence of various formulation variables was assessed using a Box–Behnken design. The prepared formulations were characterised for particle size, polydispersity index (PdI), zeta potential, encapsulation efficiency (EE), and release efficiency (RE). The biocompatibility of the optimised formulation on Caco-2 cells was determined using the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide assay, and it was then subjected to ex-vivo permeation using rat intestine. EFV-loaded LNCs were found to be spherical, in the 20–100 nm range, with EE of 82–97%. The best results were obtained from LNCs prepared with 17.5% Labrafac and 10% Solutol HS15 when the volume ratio of the diluting aqueous phase to the initial emulsion was 3.5. The mean particle size, zeta potential, PdI, EE, drug loading, and RE over 144 h of the optimised formulation were 60.71 nm, −35.93 mV, 0.09, 92.60%, 7.39%, and 55.96%, respectively. The optimised LNCs increased the ex-vivo intestinal permeation of EFV compared with the drug suspension. Thus, LNCs could be promising for improved oral delivery of EFV.
Inspec keywords: biomedical materials, solubility, drugs, encapsulation, emulsions, nanoparticles, particle size, nanofabrication, suspensions, toxicology, nanomedicine, cellular biophysics, lipid bilayers, electrokinetic effects, drug delivery systems, molecular biophysics
Other keywords: ex-vivo permeation, diluting aqueous phase, mean particle size, zeta potential, drug loading, optimised formulation, ex-vivo intestinal permeation, improved oral delivery, efavirenz oral delivery, optimisation, ex-vivo gut permeation study, solubility, bioavailability, phase-inversion temperature method, formulation variables, Box–Behnken design, polydispersity index, encapsulation efficiency, Caco-2 cells, lipid nanocapsules, 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide assay, EFV-loaded LNC, drug suspension, size 20.0 nm to 100.0 nm, time 144.0 hour, size 60.71 nm, voltage -35.93 mV
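A Box–Behnken design for three formulation factors (as used above) has a fixed structure in coded units: every pair of factors takes all four ±1 combinations while the remaining factor sits at its centre level, plus replicated centre points. The sketch below generates that standard design; mapping of the coded columns to Labrafac %, Solutol HS15 %, and dilution ratio, and the choice of three centre points, are illustrative assumptions:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Coded (-1/0/+1) Box-Behnken design: for each pair of factors, all four
    +/-1 combinations with the other factors at 0, plus centre-point runs."""
    runs = []
    for pair in combinations(range(n_factors), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

# e.g. columns could code Labrafac %, Solutol HS15 %, dilution ratio
design = box_behnken(3)   # 12 edge-midpoint runs + 3 centre points = 15 runs
```

Avoiding the ±1/±1/±1 corners is the practical appeal: no run requires all factors at their extremes simultaneously, while the design still supports a full quadratic response-surface model.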


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号