Similar Literature
20 similar documents found (search time: 828 ms)
1.
林闯  孔祥震  周寰 《软件学报》2009,20(7):1986-2004
With continuing advances in computer hardware and software and ever-growing application demands, computer-centered computing systems are being applied ever more widely while growing rapidly in complexity, making the need to evaluate and improve their dependability increasingly urgent. This paper first defines the dependability of computing systems and systematically defines a complete set of quantitative evaluation metrics; it also classifies and analyzes in detail the various dependability threats that computing systems face. Because traditional methods struggle with the dependability problems of complex systems, new techniques are still being sought, and against this background virtualization has seen a revival and become a major research focus. The paper surveys existing research on using virtualization to enhance system dependability and summarizes the features and mechanisms virtualization offers for that purpose. However, limitations of current computing-system architectures make it hard to exploit these advantages fully. Service-oriented architecture (SOA), being loosely coupled and platform independent, fits the needs of virtualization well. The paper therefore combines SOA with virtualization and proposes a system architecture for enhancing computing-system dependability, service-oriented virtualization (SOV), and analyzes how an SOV system uses its architectural advantages and virtualization mechanisms to preserve dependability under various dependability threats.

2.
Stochastic Petri nets (SPNs) are a flexible and powerful modeling tool with a mature theoretical framework for dependability studies. This work models the front-end dispatcher and back-end servers of a cluster system with SPNs; by providing the necessary repair guarantee for the front-end dispatcher during modeling, it derives an SPN model of the whole cluster, obtains the cluster's dependability parameters from the model analysis, and simulates the model. Simulation results show that once repair guarantees are provided for the critical front-end dispatcher, the steady-state availability of the system model improves substantially.
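The steady-state availability the simulation examines can be illustrated with the simplest repairable-system model: a two-state Markov chain with failure rate λ and repair rate μ has steady-state availability A = μ/(λ + μ). A minimal sketch (the rates are illustrative, not taken from the paper):

```python
# Steady-state availability of a single repairable component:
# a two-state continuous-time Markov chain with failure rate lam
# and repair rate mu has availability A = mu / (lam + mu).
def steady_state_availability(lam: float, mu: float) -> float:
    return mu / (lam + mu)

# A repair guarantee (a larger effective repair rate mu) raises
# availability, mirroring the abstract's qualitative conclusion.
a_slow_repair = steady_state_availability(lam=0.01, mu=0.1)
a_fast_repair = steady_state_availability(lam=0.01, mu=1.0)
```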

3.
The skip index is a dependable index, but it can only index strictly monotonically increasing sequences and cannot handle out-of-order ones. To solve this problem, this paper proposes a new index that can index sequences in arbitrary order while still guaranteeing dependability. By adding left-skip pointers to the original skip-index structure, an index node can place an incoming node under its left or right pointer according to the node's value, so random sequences can be handled; and because the path from every node to the root is fixed and unique, the dependability of the index is preserved. Both experimental results and theoretical proofs show that the index is a dependable index capable of handling random sequences; compared with the original index, its construction complexity is markedly lower while its lookup complexity is the same. The novelty of this work is that it removes the skip index's restriction to monotone sequences while preserving the index's dependability.

4.
Distributed environments require transferable authorization, but delegation in traditional delegation-authorization models lacks temporal and spatial constraints, and the subjects allowed to delegate are chosen manually based only on trust relationships between subjects, which is imprecise and vague. To address these problems, this paper proposes a graph-based trustworthy delegation-authorization model with spatio-temporal constraints: it uses fuzzy theory to determine trustworthy delegation subjects, adds temporal and spatial constraints, and analyzes and resolves the problems of cyclic authorization and authorization revocation. The model satisfies security requirements such as spatio-temporal constraints and trustworthy transitive authorization in applications and is broadly applicable.

5.
Removing the restriction of the Markov assumption, this work assumes that the system lifetime and the repair times of both corrective and preventive maintenance follow general probability distributions. Exploiting the numerical advantages of discrete-time models, it establishes the transition relations among three discrete-time system states (normal operation, corrective maintenance, and preventive maintenance) and, on this basis, derives the dependability D of an ADC model with preventive maintenance under general distributions. A numerical example shows that the evaluation method helps choose an appropriate preventive-maintenance period to improve system effectiveness.
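The three-state structure described above can be sketched as a discrete-time chain. One caveat: the paper's contribution is precisely that it handles general holding-time distributions, whereas this illustration uses memoryless (geometric) transitions, and the probabilities are invented:

```python
# Discrete-time sketch of the three states in the abstract:
# index 0 = normal operation, 1 = corrective maintenance,
# 2 = preventive maintenance. Rows are transition probabilities
# per time step (invented for the example; the paper allows
# general, non-geometric holding times).
P = [
    [0.95, 0.03, 0.02],   # up -> up / corrective / preventive
    [0.60, 0.40, 0.00],   # corrective repair completes w.p. 0.6 per step
    [0.80, 0.00, 0.20],   # preventive maintenance completes w.p. 0.8
]

pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(10_000):   # power iteration to the stationary distribution
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

availability = pi[0]      # long-run fraction of time spent "up"
```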

6.
杨辉  彭晗  朱建勇  聂飞平 《计算机仿真》2021,38(8):328-332,343
Spectral clustering can cluster data of arbitrary shape and can effectively improve the quality of base clusterings in a clustering ensemble. In previous ensemble algorithms, the ensemble result is not the final clustering; another clustering algorithm must still be applied to obtain it, so the solution passes through discrete, continuous, and discrete stages in turn. This paper proposes a bipartite clustering-ensemble algorithm based on spectral clustering. In the generation stage, spectral clustering produces the base clusterings, which are then selected by normalized mutual information. The selected base clusterings and the samples serve as vertices of a graph, and a bipartite clustering algorithm clusters base clusterings and samples simultaneously on that graph, yielding the final clustering directly. Experiments comparing the proposed method with several clustering-ensemble algorithms show favorable results.
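Normalized mutual information, used above to select base clusterings, can be computed directly from two label vectors. A small self-contained sketch (arithmetic-mean normalization of the entropies; the example labelings are made up):

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two labelings,
    with arithmetic-mean normalization of the entropies."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))          # joint label counts
    mi = sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in pab.items())
    h = lambda cnt: -sum(c / n * math.log(c / n) for c in cnt.values())
    return mi / ((h(pa) + h(pb)) / 2)

# identical partitions (up to label renaming) give NMI = 1,
# so such a base clustering would score highly in the selection step
score = nmi([0, 0, 1, 1], [1, 1, 0, 0])
```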

7.
Subjective visual assessment of manifold-learning-based nonlinear dimensionality reduction is highly subjective and lacks the quantitative computation needed for guidance. This paper therefore proposes two metrics, trustworthiness and continuity, for quantitatively evaluating manifold dimensionality-reduction quality: trustworthiness measures how trustworthy the low-dimensional visualization is, while continuity measures how well the original neighborhoods are preserved. Common manifold-learning-based nonlinear dimensionality-reduction methods are classified and compared, and experiments on the classic datasets Swissroll, Swisshole, Twopeaks, Helix, and Puncturedsphere use the trustworthiness and continuity metrics for comparative analysis; the results verify the effectiveness of the approach.
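The trustworthiness metric has a standard form: it penalizes points that enter a k-neighborhood in the embedding without being neighbors in the original space. A minimal sketch (squared distances are used for ranking, which preserves neighbor order; the sample points are invented):

```python
def trustworthiness(X, Y, k):
    """Trustworthiness T(k) of an embedding Y of data X: 1 minus a
    normalized penalty over points that are k-nearest neighbors in
    the embedding but not in the original space."""
    n = len(X)

    def dists(P):  # squared Euclidean distances (rank-preserving)
        return [[sum((pi - qi) ** 2 for pi, qi in zip(P[i], P[j]))
                 for j in range(n)] for i in range(n)]

    dx, dy = dists(X), dists(Y)
    penalty = 0
    for i in range(n):
        order_x = sorted((j for j in range(n) if j != i), key=lambda j: dx[i][j])
        order_y = sorted((j for j in range(n) if j != i), key=lambda j: dy[i][j])
        rank = {j: r + 1 for r, j in enumerate(order_x)}  # 1 = nearest originally
        for j in set(order_y[:k]) - set(order_x[:k]):     # "intruder" neighbors
            penalty += rank[j] - k
    return 1.0 - 2.0 * penalty / (n * k * (2 * n - 3 * k - 1))

# a distance-preserving "embedding" is perfectly trustworthy
X = [[0.0], [1.0], [3.0], [7.0], [15.0]]
t = trustworthiness(X, X, k=2)
```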

8.
Malicious attacks such as Trojan implantation pose a serious threat to communication terminals. This paper therefore proposes an attack-behavior recognition algorithm for communication terminals under trustworthy cloud computing. A data-acquisition module captures the data stream of the terminal's image, and a trust-verification mechanism extends the chain of trust to the virtual machine monitor of the cloud environment and to the terminal. Once the trustworthiness of the terminal's runtime environment has been verified, the recognition module uses a Bayesian algorithm to decide whether the data stream contains attack behavior, determines the attack category by computing the maximum a posteriori probability over the attack data, and reports the result to a management module; together with a rate-limiting module it then throttles data streams containing attacks until the attack on the terminal ends. Experimental results show that adding the dynamic trust-verification mechanism effectively improves terminal access security and keeps data flowing while the terminal is under attack; under different levels of interference, the mean absolute percentage error of attack-behavior recognition stays below 0.25%.
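The maximum-a-posteriori decision described above picks the class c maximizing P(c)·∏P(x_f | c). A toy categorical naive-Bayes sketch, with invented two-bit "traffic features" and class labels standing in for the real data stream:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature_tuple, label) with binary features."""
    priors = Counter(lbl for _, lbl in samples)
    cond = defaultdict(Counter)          # (class, position) -> value counts
    for x, lbl in samples:
        for pos, val in enumerate(x):
            cond[(lbl, pos)][val] += 1
    return priors, cond, len(samples)

def map_classify(x, priors, cond, n, alpha=1.0):
    """MAP decision: argmax_c log P(c) + sum_f log P(x_f | c),
    with Laplace smoothing (the +2 assumes binary features)."""
    best, best_lp = None, -math.inf
    for lbl, cnt in priors.items():
        lp = math.log(cnt / n)
        for pos, val in enumerate(x):
            c = cond[(lbl, pos)]
            lp += math.log((c[val] + alpha) / (sum(c.values()) + alpha * 2))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

# invented training data: two binary features per observation
data = [((1, 0), "normal"), ((1, 1), "normal"),
        ((0, 1), "trojan"), ((0, 0), "trojan")]
priors, cond, n = train(data)
label = map_classify((0, 1), priors, cond, n)
```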

9.
To address the bill-of-quantities computation involved when the tendering, bidding, and bid-evaluation parties audit and verify project cost during construction-project bidding, this paper presents the design and implementation of a trustworthy computation method based on compiler technology.

10.
Clustering mixed data is an important problem in cluster analysis. Existing mixed-data clustering algorithms cluster on the basis of similarity measures over all samples and are therefore inefficient on large-scale data. This paper designs a new sampling strategy and, on that basis, proposes a sampling-based clustering-ensemble algorithm for large-scale mixed data: it separately clusters each of the sample subsets obtained with the new strategy and integrates the results into the final clustering. Experiments show that, compared with an improved K-prototypes algorithm, the algorithm is significantly more efficient while its clustering-validity indices remain essentially the same.

11.
Robbins  Scott 《Minds and Machines》2019,29(4):495-514

There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.


12.

This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline, and possible future perspectives. In this respect, I go through different seasons of AI research (of AI and Law in particular): from the Winter of AI, namely a period of mistrust in AI (throughout the eighties until the early nineties), to the Summer of AI, namely the current period of great interest in the discipline with lots of expectations. One of the results of the first decades of AI research is that “intelligence requires knowledge”. Since its inception the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may foster the development of the Semantic Web, as well as of AI systems. Finally, I provide my insight into the potential of AI development, which takes into account technological opportunities and theoretical limits.


13.
Many applications need to solve the deadline-guaranteed packet scheduling problem. However, it is a very difficult problem if three or more deadlines are present in a set of packets to be scheduled. The traditional approach to this problem is to use EDF (Earliest Deadline First) or similar methods. Recently, a non-EDF-based algorithm was proposed that consistently produces a higher throughput than EDF-based algorithms by repeatedly finding an optimal schedule for two classes. However, this new method requires that the two classes be non-overloaded, which greatly restricts its applications. Since the overloaded situation is not avoidable from one iteration to the next when dealing with multiple classes, it is compelling to answer the open question: can we find an optimal schedule for two overloaded classes efficiently? This paper first proves that this problem is NP-complete. It then proposes an optimal preprocessing algorithm that is guaranteed to drop a minimum number of packets from the two classes such that the remaining set is non-overloaded. This result directly improves on the new method.
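The EDF baseline the paper compares against can be sketched with a heap keyed on deadlines. One simplified convention is assumed here (not taken from the paper): time advances in unit slots, one packet is sent per slot, and a packet must start no later than its deadline:

```python
import heapq

def edf_schedule(packets):
    """packets: list of (arrival_time, deadline). One packet is sent per
    unit time slot; a packet must start no later than its deadline.
    Returns the number of packets transmitted under EDF (always send
    the pending packet with the earliest deadline)."""
    packets = sorted(packets)                 # by arrival time
    heap, sent, t, i = [], 0, 0, 0
    while i < len(packets) or heap:
        if not heap and packets[i][0] > t:
            t = packets[i][0]                 # idle until the next arrival
        while i < len(packets) and packets[i][0] <= t:
            heapq.heappush(heap, packets[i][1])
            i += 1
        while heap and heap[0] < t:           # drop packets past their deadline
            heapq.heappop(heap)
        if heap:
            heapq.heappop(heap)               # transmit the earliest deadline
            sent += 1
            t += 1
    return sent

# three packets arrive at t=0 with deadlines 0,1,1: only two can start on time
count = edf_schedule([(0, 0), (0, 1), (0, 1)])
```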

14.
A question-answering system aims to answer questions posed in natural language with accurate, concise answers. Taking tourism information services as the application background, this paper proposes a method for automatically extracting question-answer pairs based on domain knowledge. Common tourism questions are examined and domain knowledge is built; on this basis, a pattern-matching algorithm for user questions and an answer-extraction algorithm are designed, and for questions that match no pattern, sentence-similarity computation is used to retrieve relevant answers. Experimental results show that the proposed method is feasible and achieves automatic question answering for tourism questions.
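The sentence-similarity fallback can be sketched with simple word overlap (Jaccard). The FAQ entries are invented, and whitespace tokenization stands in for the word segmentation that Chinese text would require:

```python
def jaccard_similarity(s1: str, s2: str) -> float:
    """Word-overlap similarity between two sentences.
    (Chinese text would need word segmentation first; whitespace
    tokenization keeps the sketch self-contained.)"""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    if not (w1 or w2):
        return 0.0
    return len(w1 & w2) / len(w1 | w2)

def best_answer(question, qa_pairs):
    """Fallback retrieval: return the stored answer whose question
    is most similar to the user's question."""
    return max(qa_pairs, key=lambda qa: jaccard_similarity(question, qa[0]))[1]

# invented FAQ entries for illustration
faq = [
    ("what time does the museum open", "9 am daily"),
    ("how much is a bus ticket", "2 yuan"),
]
answer = best_answer("when does the museum open", faq)
```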

15.
People encounter more information than they can possibly use every day. But not all information is of equal value. In many cases, certain information appears to be better, or more trustworthy, than other information. The challenge most people then face is to judge which information is more credible. In this paper we propose a new problem called Corroboration Trust, which studies how to find credible news events by seeking more than one source to verify information on a given topic. We design an evidence-based corroboration trust algorithm called TrustNewsFinder, which utilizes the relationships between news articles and related evidence information (person, location, time and keywords about the news). A news article is trustworthy if it provides many pieces of trustworthy evidence, and a piece of evidence is likely to be true if it is provided by many trustworthy news articles. Our experiments show that TrustNewsFinder successfully finds true events among conflicting information and identifies trustworthy news better than the popular search engines. Copyright © 2009 John Wiley & Sons, Ltd.
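The mutual reinforcement TrustNewsFinder relies on (trustworthy articles provide trustworthy evidence, and evidence provided by trustworthy articles is likely true) resembles a HITS-style iteration on the article-evidence bipartite graph. A sketch with invented links, not the paper's actual scoring:

```python
# Sketch of the mutual-reinforcement idea behind TrustNewsFinder.
# The article -> evidence links below are invented for illustration.
links = {
    "a1": ["e1", "e2"],
    "a2": ["e1", "e2", "e3"],
    "a3": ["e4"],               # an outlier citing uncorroborated evidence
}
evidence = sorted({e for es in links.values() for e in es})

art = {a: 1.0 for a in links}
ev = {e: 1.0 for e in evidence}
for _ in range(50):             # HITS-style alternating updates
    ev = {e: sum(art[a] for a, es in links.items() if e in es)
          for e in evidence}
    norm = max(ev.values())
    ev = {e: v / norm for e, v in ev.items()}
    art = {a: sum(ev[e] for e in es) for a, es in links.items()}
    norm = max(art.values())
    art = {a: v / norm for a, v in art.items()}

# a2 provides the most corroborated evidence, a3 the least
```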

16.
The trustworthiness of software depends to a great extent on the trustworthiness of its code. The main factors affecting software trustworthiness include code defects, code errors, and program faults inside the software, as well as viruses and malicious code from outside, so guaranteeing trustworthiness at the code level is one of the important routes to trusted software. As one of the essential pieces of system software, a compiler's trustworthiness matters greatly to the whole computer system: programs generally must be compiled before they can run, and if the compiler is untrustworthy, the trustworthiness of the code it generates cannot be guaranteed. This paper discusses the main ideas and key techniques for designing and implementing a trusted compiler.

17.
With the proliferation of mobile devices and wireless technologies, location based services (LBSs) are becoming popular in smart cities. Two important classes of LBSs are Nearest Neighbor (NN) queries and range queries that provide user information about the locations of points of interest (POIs) such as hospitals or restaurants. Answers to these queries are more reliable and satisfactory if they come from a trustworthy crowd instead of traditional location service providers (LSPs). We introduce an approach to evaluate NN and range queries with crowdsourced data and computation that eliminates the role of an LSP. In our crowdsourced approach, a user evaluates LBSs in a group. It may happen that group members do not have knowledge of all POIs in a certain area. We present efficient algorithms to evaluate queries with an accuracy guarantee in incomplete databases. Experiments show that our approach is scalable and incurs less computational overhead.
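The crowdsourced evaluation can be sketched at its simplest: pool the POIs known to each group member, deduplicate the union, and answer the NN query from it. Names and coordinates are invented:

```python
import math

def nearest_poi(query, member_knowledge):
    """Answer an NN query from crowdsourced knowledge: each group
    member contributes the POIs it knows; the union is deduplicated
    and the POI closest to the query point is returned."""
    pois = {}                                   # name -> location
    for known in member_knowledge:              # union of members' knowledge
        pois.update(known)
    return min(pois, key=lambda name: math.dist(query, pois[name]))

# two members with partial, overlapping views of the area
members = [
    {"hospital": (1.0, 1.0), "cafe": (4.0, 0.0)},
    {"cafe": (4.0, 0.0), "pharmacy": (0.5, 0.2)},
]
result = nearest_poi((0.0, 0.0), members)
```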

18.
In this paper, we prove that a query plan is safe in tuple-independent probabilistic databases if and only if every one of its answer tuples is tree-structured in probabilistic graphical models. We classify hierarchical queries into core and non-core hierarchical queries and show that the existing methods can only generate safe plans for core hierarchical queries. Inspired by the bucket elimination framework, we give the sufficient and necessary conditions for the answer relation of every candidate sub-query to be used as a base relation. Finally, the proposed algorithm generates safe plans for extensional query evaluation on non-boolean hierarchical queries and invokes the SPROUT algorithm [24] for intensional query evaluation on boolean queries. A case study on the TPC-H benchmark reveals that the safe plans of Q7 and Q8 can be evaluated efficiently. Furthermore, extensive experiments show that safe plans generated by the proposed algorithm scale well.

19.
Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming human performance at certain tasks. There is no doubt that AI is important to improve human health in many ways and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond the lab, in routine environments, we need to do more than just improve the performance of existing AI methods. Robust AI solutions must be able to cope with imprecision and missing and incorrect information, and explain both the result and the process of how it was obtained to a medical expert. Using conceptual knowledge as a guiding model of reality can help to develop more robust, explainable, and less biased machine learning models that can ideally learn from less data. Achieving these goals will require an orchestrated effort that combines three complementary Frontier Research Areas: (1) complex networks and their inference, (2) graph causal models and counterfactuals, and (3) verification and explainability methods. The goal of this paper is to describe these three areas from a unified view and to motivate how information fusion in a comprehensive and integrative manner can not only help bring these three areas together, but also have a transformative role by bridging the gap between research and practical applications in the context of future trustworthy medical AI. This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future solutions must not only be ethically responsible, but also legally compliant.

20.
Artificial Intelligence (AI) software is a reality, but only for limited classes of problems. In general, AI problems are significantly different from those of conventional software engineering. The differences suggest a different program development methodology for AI problems: one that does not readily yield programs with the desiderata of practical software (reliability, robustness, etc.). In addition, the problem of machine learning must be solved (to some degree) before the full potential of AI can be realized, but the resultant self-adaptive software is likely to further aggravate the software crisis. Realization of the full potential of AI in practical software awaits some prerequisite breakthroughs in both basic AI problems and an appropriate AI software development methodology.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号