A total of 18 similar documents were found (search time: 171 ms)
1.
This paper proposes a QoS prediction method for Web service compositions that support transaction mechanisms. It analyzes transactional exception-handling strategies and their impact on the execution flow of composite services, defines a description model for transaction-aware composite services, and on this basis presents a QoS prediction algorithm for composite services. Experiments show that, when predicting the QoS of composite services with transaction mechanisms, the method is more accurate than existing workflow-based prediction approaches, demonstrating good feasibility and effectiveness.
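The core of such prediction is aggregating per-service QoS over the composition structure. The sketch below is a minimal illustration of that aggregation for sequential and parallel blocks; the node types and rules are assumptions for demonstration, not the paper's actual model, which additionally accounts for transactional exception handling such as compensation paths.

```python
# Minimal sketch of QoS aggregation over a composite-service structure tree.
# "seq"/"par"/"task" node kinds and the aggregation rules are illustrative
# assumptions, not the paper's transaction-aware description model.

def aggregate_qos(node):
    """Return (response_time, reliability) for a structure-tree node."""
    kind = node["kind"]
    if kind == "task":
        return node["time"], node["reliability"]
    children = [aggregate_qos(c) for c in node["children"]]
    if kind == "seq":        # sequential: response times add up
        t = sum(t for t, _ in children)
    elif kind == "par":      # parallel AND-split: wait for the slowest branch
        t = max(t for t, _ in children)
    else:
        raise ValueError(f"unknown node kind: {kind}")
    r = 1.0
    for _, rel in children:  # all branches must succeed
        r *= rel
    return t, r

flow = {"kind": "seq", "children": [
    {"kind": "task", "time": 2.0, "reliability": 0.99},
    {"kind": "par", "children": [
        {"kind": "task", "time": 3.0, "reliability": 0.98},
        {"kind": "task", "time": 5.0, "reliability": 0.95},
    ]},
]}
print(aggregate_qos(flow))  # response time 7.0; reliability = product of all three
```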
3.
Researchers have devoted considerable effort to the recovery of composite transactions, but most results focus on maintaining transactional consistency through backward recovery, typically by compensation. The main drawback of backward recovery is its high cost, and a backward strategy alone cannot satisfy all recovery requirements. This paper proposes a failure-type-based recovery algorithm (covering forward, backward, and alternative recovery), formalized as an extended Petri net. To realize relaxed ACID properties, it introduces state tokens, data tokens, and QoS tokens, and adds failure transitions and compensation transitions. When a failure occurs, the termination-dependency point (TDP) and the compensation set are computed dynamically; the task's failure type is derived from the control-flow, data-flow, temporal, state, and behavioral dependencies among tasks, an appropriate recovery strategy is selected, and an executable model is constructed that supports seamless addition and removal of failure recovery.
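The strategy-selection step above can be sketched as a small dispatch from failure type to recovery action. The failure taxonomy and the strategy table below are assumptions for illustration only; the paper's actual selection is driven by its Petri-net formalization, not this code.

```python
# Illustrative sketch: choose forward / backward / alternative recovery from
# a failure type, and for backward recovery compute the compensation set
# (completed tasks after the termination-dependency point, undone in reverse).
# The taxonomy and table are hypothetical, not the paper's formal model.

RECOVERY_TABLE = {
    "transient": "forward",      # retry the failed task
    "semantic": "backward",      # compensate back to the TDP
    "permanent": "alternative",  # substitute an equivalent service
}

def plan_recovery(failure_type, completed, tdp):
    """completed: task ids in completion order; tdp: index of the
    termination-dependency point. Returns (strategy, compensation set)."""
    strategy = RECOVERY_TABLE[failure_type]
    if strategy == "backward":
        comp_set = [t for i, t in reversed(list(enumerate(completed))) if i >= tdp]
        return strategy, comp_set
    return strategy, []

print(plan_recovery("semantic", ["t1", "t2", "t3", "t4"], tdp=1))
# → ('backward', ['t4', 't3', 't2'])
```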
4.
Traditional concurrency-control mechanisms are insufficient for executing transactions in mobile computing environments, so a new transaction-management mechanism is proposed to improve the efficiency of mobile transaction processing. The paper focuses on concurrency control for mobile transactions and proposes a strategy based on a dynamic-timer mechanism: a dynamic timer is added to the lock manager, and by adjusting the transaction execution queue and dynamically allocating execution time, the number of transaction restarts is reduced and the commit rate is improved. Performance tests show that the proposed strategy increases the transaction success rate in mobile computing environments.
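A toy version of the dynamic-timer idea can be sketched as a lock manager that assigns each transaction an execution deadline sized to its remaining work and orders the queue by those deadlines rather than restarting blocked transactions. All names and the sizing heuristic below are illustrative assumptions, not the paper's mechanism.

```python
# Toy sketch of deadline-based queue adjustment: each transaction gets a
# dynamically computed timer, and the lock manager serves transactions in
# deadline order so blocked work commits instead of being restarted.
# The base_slice heuristic is a hypothetical stand-in for the paper's policy.

import heapq

def schedule(transactions, base_slice=1.0):
    """transactions: list of (txn_id, remaining_ops).
    Returns the commit order produced by deadline-ordered execution."""
    clock = 0.0
    queue = []
    for txn_id, ops in transactions:
        deadline = clock + base_slice * ops  # dynamic timer per transaction
        heapq.heappush(queue, (deadline, txn_id, ops))
    order = []
    while queue:
        deadline, txn_id, ops = heapq.heappop(queue)
        clock = max(clock, deadline)  # run until its timer expires
        order.append(txn_id)          # commits rather than restarting
    return order

print(schedule([("T1", 5), ("T2", 2), ("T3", 3)]))  # → ['T2', 'T3', 'T1']
```

Serving short transactions first keeps long ones from repeatedly forcing restarts of the rest of the queue, which is the intuition behind the improved commit rate.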
5.
Executing spatial service compositions involves complex, diverse spatial data types and large volumes of spatial data; with a centralized composition model, the communication overhead severely degrades system performance. A decentralized composition model overcomes these drawbacks and also offers good scalability and high throughput. This paper proposes a P2P-based execution technique for spatial service composition in which Geo-Peers form a P2P overlay network that serves as the infrastructure for decentralized execution. At execution time, the user-modeled workflow graph is first transformed into an optimal graph that maximizes the expressible parallelism; this optimal graph is then decomposed into multiple sub-graphs deployable on Geo-Peers, and flow-semantics constraints are defined for executing the global flow across the Geo-Peer network. Simulation experiments show that P2P-based execution overcomes many limitations of centralized composition and effectively improves overall system performance.
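The parallelism-maximizing transformation can be illustrated by grouping a workflow DAG into topological levels: tasks in one level have no dependencies between them and could each be shipped to a different Geo-Peer. The graph encoding is an illustrative assumption, not the paper's flow-graph transformation.

```python
# Toy sketch: group a workflow DAG into topological levels so that all tasks
# in a level can execute in parallel. Each level (or sub-graph) could then be
# deployed to a separate Geo-Peer; the dict-of-dependencies encoding is a
# hypothetical stand-in for the paper's flow model.

def parallel_levels(deps):
    """deps: {task: set of prerequisite tasks}.
    Returns tasks grouped into dependency-free levels."""
    indeg = {t: len(p) for t, p in deps.items()}
    children = {t: [] for t in deps}
    for t, preds in deps.items():
        for p in preds:
            children[p].append(t)
    level = [t for t, d in indeg.items() if d == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for t in level:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = nxt
    return levels

deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(parallel_levels(deps))  # → [['A'], ['B', 'C'], ['D']]
```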
6.
Verification of Web Service Orchestration Based on Concurrent Transaction Logic (cited by 2: 1 self-citation, 1 other)
Service orchestration addresses business integration across organizations in a widely distributed, dynamic, autonomous, and heterogeneous network environment, so guaranteeing the correct execution of composite services and verifying their properties is particularly important. Formal methods are an effective approach: service orchestration should rest on rigorous formal models, and verification tools built on models with clear formal semantics can establish the correctness of composite services. This paper describes and models the elements of service orchestration using Concurrent Transaction Logic (CTR), gives translation rules from WS-BPEL to CTR, and discusses verification of service orchestration in CTR as well as the expressive power of WS-BPEL and CTR. Finally, a real service orchestration is modeled in CTR as an example, demonstrating the validity of the CTR model.
7.
Research on Concurrency Control Mechanisms Supporting Collaborative Design in Open Environments (cited by 1: 0 self-citations, 1 other)
To address the shortcomings of existing distributed concurrency-control mechanisms, a concurrency-control mechanism suited to collaborative-design transactions in open environments is proposed. The mechanism distinguishes read-only transactions from update transactions: read-only transactions execute under a multiversion timestamp-ordering protocol, while update transactions execute under a multigranularity locking protocol based on two-phase ordered-compatibility locking. Users thus obtain query results quickly, and the concurrency of collaborative-design transactions is effectively improved. Experimental analysis shows that the mechanism is well suited to open collaborative-design environments.
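The non-blocking read path is the key to fast queries: a read-only transaction with start timestamp ts sees, for each item, the newest version committed at or before ts, so it never waits on update transactions. The sketch below shows that multiversion timestamp-ordering read; the data layout is an illustrative assumption.

```python
# Minimal sketch of a multiversion timestamp-ordering read for read-only
# transactions. The version-chain layout ({item: sorted (commit_ts, value)
# pairs}) is a hypothetical representation, not the paper's implementation.

from bisect import bisect_right

def mv_read(versions, item, ts):
    """Return the value visible to a read-only transaction started at ts:
    the newest version with commit_ts <= ts, or None if none exists."""
    chain = versions.get(item, [])
    ts_list = [commit_ts for commit_ts, _ in chain]
    i = bisect_right(ts_list, ts)  # count of versions committed at or before ts
    if i == 0:
        return None  # no version visible at this timestamp
    return chain[i - 1][1]

versions = {"x": [(10, "v1"), (20, "v2"), (35, "v3")]}
print(mv_read(versions, "x", 25))  # → 'v2'
print(mv_read(versions, "x", 5))   # → None
```

Because the reader consults a fixed snapshot, concurrent update transactions holding locks never delay it, which is exactly why separating the two transaction classes raises concurrency.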
8.
To meet the needs of SoC architecture design and embedded software development, a transaction-level SoC verification platform based on SPARC V8 was designed in SystemC using transaction-level modeling. To reduce design complexity and increase simulation speed, an instruction-accurate transaction-level model of the SPARC V8 processor was built with interpretation-execution techniques, and the AMBA bus, interrupt controller, UART, timers, and other devices were modeled at transaction level using SystemC's hierarchical-channel mechanism. After the platform was assembled, its functionality and simulation speed were verified with the MiBench benchmark suite. Simulation results confirm functional correctness and show a substantial speedup over an RTL SoC verification platform.
12.
With the intensive use of the Internet, patient-centric healthcare systems have shifted from paper-based records to a computerized format. Electronic patient-centric healthcare databases contain information about patients that must remain available for future reference, and the sensitivity of that data makes these databases a target for attackers. Hacking into such systems and publishing their contents online threatens their continuity. Denial of service cannot be tolerated, because one never knows when a patient's record will need to be retrieved: it disrupts the continuity of the healthcare system, which in turn threatens patients' lives, reduces the system's efficiency, and increases the operating costs of the attacked organization. Although many defensive security methods have been devised, malicious transactions may still penetrate these safeguards and modify critical data in healthcare databases. When a malicious transaction modifies a patient record, the damage may spread to other records through subsequent valid transactions, so recovery techniques are required. The efficiency of the data-recovery algorithm is essential for e-healthcare systems: a patient cannot wait long for his or her medical history to be recovered so that the correct medication can be prescribed. Fast recovery, in turn, requires an efficient damage-assessment process before the recovery stage, performed as soon as the intrusion detection system detects the malicious activity. The execution time of the recovery process is a crucial performance measure, since it is directly proportional to the denial-of-service time of the healthcare system. This paper presents a high-performance damage-assessment and recovery algorithm for e-healthcare systems.
The algorithm provides fast damage assessment after an attack by a malicious transaction, preserving the availability of the e-healthcare database; reducing recovery execution time is its key target. The proposed algorithm outperforms existing work, running about six times faster than the most recently proposed algorithm. In the worst case it takes 8.81 ms to discover the damaged part of the database, versus 50.91 ms for the fastest recent algorithm; in the best case it requires 0.43 ms, 86 times faster than the fastest recent work. This is a significant reduction in execution time compared with other available approaches. Shorter damage-assessment time means shorter denial-of-service periods, which in turn safeguards the continuity of the patient-centric healthcare system.
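The damage-spread idea described above can be sketched as a single pass over the transaction log: starting from the detected malicious transaction, any later transaction that read an item written by an already-damaged transaction becomes damaged too. The log format is an assumption for illustration; the paper's algorithm is an optimized variant of this dependency analysis, not this exact code.

```python
# Illustrative sketch of dependency-based damage assessment over a
# transaction log. The (txn_id, read_set, write_set) log format is a
# hypothetical simplification; blind writes by clean transactions are
# conservatively ignored here.

def assess_damage(log, malicious_id):
    """log: ordered list of (txn_id, read_set, write_set).
    Returns (affected transaction ids, dirty data items to recover)."""
    dirty = set()
    affected = set()
    for txn_id, reads, writes in log:
        if txn_id == malicious_id or dirty & set(reads):
            affected.add(txn_id)   # read corrupted data (or is the attacker)
            dirty |= set(writes)   # so its writes are now suspect too
    return affected, dirty

log = [
    ("T1", [], ["a"]),      # malicious: corrupts item a
    ("T2", ["a"], ["b"]),   # reads a -> damaged, corrupts b
    ("T3", ["c"], ["d"]),   # independent -> clean
    ("T4", ["b"], ["e"]),   # reads b -> damaged
]
print(assess_damage(log, "T1"))  # → ({'T1', 'T2', 'T4'}, {'a', 'b', 'e'})
```

Only the affected set then needs undo/redo work, which is why a tight damage-assessment pass directly shortens the denial-of-service window.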
13.
Conventional database systems suffer from poor execution speed due to the disk-I/O bottleneck. With falling memory-chip prices and rapid advances in high-capacity memory technology, research on main-memory database systems has gained wide attention. This paper discusses transaction management in a client-server main-memory database environment. Although client-server systems have been widely studied, the recovery technique has been investigated only in M. Franklin et al., TR 1081, 1992. Current recovery techniques transfer the generated log records and their data pages from the client to the server during transaction execution at the client site; this increases data-transfer time on the network, and global synchronization is not guaranteed. In the approach presented here, the client transfers only the log records of completed transactions to the server, resolving these problems. The server then manages only the replay of completed log records, which yields a simple recovery algorithm. The client exploits system concurrency fully by handling abort actions itself, and a page-unit recovery technique is suggested to reduce the time required to recover the whole database.
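The "ship only completed log records" idea can be sketched as a client that buffers redo records locally and sends them in one batch at commit, so aborted work never crosses the network. All class and method names below are illustrative assumptions about the protocol, not the paper's implementation.

```python
# Sketch of commit-time log shipping: the client buffers per-transaction
# log records and transfers them to the server only on commit; aborts are
# resolved entirely at the client. Names are hypothetical.

class Client:
    def __init__(self, server_log):
        self.server_log = server_log   # stands in for the network link
        self.pending = {}              # txn_id -> buffered log records

    def write(self, txn_id, record):
        self.pending.setdefault(txn_id, []).append(record)

    def abort(self, txn_id):
        self.pending.pop(txn_id, None)  # handled locally, nothing shipped

    def commit(self, txn_id):
        records = self.pending.pop(txn_id, [])
        self.server_log.append(("commit", txn_id, records))  # one transfer

server_log = []
c = Client(server_log)
c.write("T1", ("x", 1))
c.write("T2", ("y", 2))
c.abort("T2")
c.commit("T1")
print(server_log)  # → [('commit', 'T1', [('x', 1)])]
```

Since the server only ever sees committed work, its recovery procedure reduces to replaying complete log records in arrival order.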
14.
SystemC has become a de facto standard language for SoC and ASIP design. Verifying SystemC implementations is key to guaranteeing design correctness and preventing errors from propagating to lower levels. In this project, we translate SystemC programs into formal models and use existing model checkers to perform the verification. The proposed method is based on a semantic translation that converts sequentially executed statements, written in a software style, into parallel ones that more closely match how hardware executes; this conversion is unavoidable when verifying hardware designs, yet it is overlooked in related work. The main contribution is a translation method that preserves semantic consistency while building an SMV model of a SystemC design. We present the translation rules and implement a prototype tool supporting a subset of SystemC to demonstrate the effectiveness of our method.
15.
In knowledge services, to meet the needs of intelligent, knowledge-oriented, fine-grained reorganization of fragmented content resources, and by deeply analyzing and mining the knowledge, technology, experience, and information hidden in semantics, this work addresses the bottleneck of traditional Text-to-SQL semantic parsing. PT-Sem2SQL, based on a pre-training mechanism, is proposed. An MT-DNN pre-training mechanism combined with Kullback-Leibler divergence is designed to deepen contextual semantic understanding, and a dedicated enhancement module captures the position of contextual semantic information within a sentence. A self-correction method optimizes the execution of the generation model to repair erroneous output during decoding. Experimental results show that PT-Sem2SQL effectively improves the parsing of complex semantics, with accuracy better than related work.
18.
Generating a hierarchy from a tagging system can support many kinds of applications and is therefore important. Current research focuses mainly on discovering relations between tags but pays insufficient attention to how to use those relations to form a high-quality hierarchy. To address this, relation-identification and hierarchy-assembly methods supporting the construction of Web 2.0 tag hierarchies are studied: by analyzing and identifying the different types among the discovered tag relations, the quality of the relations is improved, and a hierarchy-assembly method based on semantic-flow analysis is proposed to construct higher-quality hierarchies. Experimental results under multiple evaluation metrics show that applying relation identification and semantic-flow analysis yields tag hierarchies of higher quality than the baseline methods.
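A simple baseline for the assembly step attaches each tag under its highest-scoring broader tag, turning a set of scored broader/narrower relations into a tree. The relation format and scoring below are illustrative assumptions; the paper's semantic-flow analysis is a more refined placement method than this greedy sketch.

```python
# Minimal sketch of assembling a tag hierarchy from scored broader/narrower
# relations: each tag keeps only its best-scoring parent. This greedy
# baseline is a hypothetical stand-in for the semantic-flow method.

def build_hierarchy(relations):
    """relations: list of (narrower, broader, score).
    Returns {child: chosen parent}, keeping the best-scoring parent."""
    best = {}
    for child, parent, score in relations:
        if child not in best or score > best[child][1]:
            best[child] = (parent, score)
    return {child: parent for child, (parent, _) in best.items()}

relations = [
    ("python", "programming", 0.9),
    ("python", "snakes", 0.3),       # weaker relation, discarded
    ("programming", "computing", 0.8),
]
print(build_hierarchy(relations))
# → {'python': 'programming', 'programming': 'computing'}
```

A production assembler would additionally detect cycles and weigh how semantics "flow" along candidate edges, which is where the relation-identification step pays off.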