Similar Articles
 20 similar articles found (search time: 93 ms)
1.
Application of IC Cards in Laboratory Management   (Cited: 3; self-citations: 0, other citations: 3)
Drawing on campus network resources, this paper systematically discusses the system components required to apply IC cards in laboratory management: the IC card itself, the card reader/writer, the PC, the implementation of one-card multi-use technology, IC card use and maintenance, and IC card security. It also lists common IC card applications in laboratory management, such as IC card door locks, computer and network access management, and IC card-based item checkout.

2.
After publishing "A New Member of the Single-Chip Microcomputer Family: the IC Card" in the May 1998 issue, this journal received many letters from readers asking about the development and use of IC card application systems. To help readers become familiar with IC cards, starting from this issue we open a column, "IC Card Application Techniques," which will systematically discuss the development, fabrication, and use of IC card application systems. The content will focus on practical problems and experience in developing and using IC card application systems, while also covering theory and principles. We hope interested readers will follow this column and send us their requests and feedback so that we can better meet their needs. Topics will include: types and characteristics of IC cards; typical IC card application systems; IC card development and fabrication environments; how to program and use IC cards; IC card data security and encryption methods; smart IC card operating systems and their applications; and IC card development examples.

3.
This paper introduces the technical characteristics of IC cards and emphasizes IC card data security. It focuses on the IC cards used in traffic management information systems and on the data security management of IC card traffic information systems, and it also points out the application prospects of IC cards.

4.
IC Card Electricity Meters and an Electricity-Use Information Management System   (Cited: 1; self-citations: 0, other citations: 1)
This paper describes the components of an IC card electricity meter and electricity-use information management system, which consists of four parts: the IC card, the IC card meter, the card-reading circuit, and the information management system. Structural block diagrams and software flowcharts are given; in practical use the system achieved its intended goals.

5.
IC cards serve as the ticket medium in automatic fare collection (AFC) systems for rail transit. The chip in an IC card is an integrated circuit chip; beyond the small size and environmental robustness demanded by the card's special operating conditions, the most important requirement is the chip's security. This paper argues that chip security is the foundation of IC card security, analyzes several sources of security problems in AFC systems, and discusses the protective measures that must be adopted starting from the chip design stage.

6.
By physical structure, credit cards fall into two broad categories: magnetic-stripe cards and IC cards. As a new service carrier for credit cards, the IC card has many advantages: security, good anti-counterfeiting, large capacity, long life, and offline processing. The IC card is bound to become the direction of future credit card development. 1. Scope of IC card applications. "IC card" is short for integrated circuit card; it is also called a smart card or chip card. The IC card is the new generation of data card following the magnetic card, and the magnetic card is being replaced because, in application, it has two rather…

7.
Smart IC Card Operating Systems and Their Applications. In previous installments we discussed the basic principles and structure of IC cards, as well as IC card development environments and basic development steps. But applying IC card technology in modern management and assembling an IC card application environment is not easy: it requires combining the IC card knowledge discussed above with database knowledge to build an application environment oriented to a specific target. This installment uses the IC card management system of the North China University of Technology campus hospital as a case study to discuss the practical problems, and their solutions, that arise in designing an IC card application system.

8.
IC Card Technology
This paper gives a comprehensive and systematic introduction to the history, key technologies, and application prospects of IC cards, and surveys the IC card security technologies that are currently of broad concern in industry.

9.
IC Card Technology   (Cited: 1; self-citations: 0, other citations: 1)
In the 1970s, with the development of VLSI and high-capacity memory chip technology, the integrated circuit card (IC card), also known as the smart card, appeared. Because magnetic cards suffer from small storage capacity, weak functionality, and poor security, IC card applications grew substantially in the 1980s. Although magnetic cards are still widely used around the world, their gradual replacement by IC cards is an inevitable trend. The application of IC cards is of great significance: it will advance the informatization of human society, and the IC card will become a human-machine interface of the information age. In an information society, every aspect of life (clothing, food, housing, transport, culture, and so on) will depend on IC cards, which will enable people in society to…

10.
The Secrets of the IC Card
A smart card is a plastic card of a given size in which an integrated circuit chip is embedded for storing and processing data; hence in China the smart card is also called an IC card. The IC card concept was proposed in the 1970s; the French company BULL created the first IC card products and applied the technology to finance, transportation, healthcare, identification, and other fields. The core of an IC card is its integrated circuit chip, generally fabricated with sub-3 μm semiconductor technology. IC cards have…

11.
This paper describes an approach to the design of interactive multimedia materials being developed in a European Community project. The developmental process is seen as a dialogue between technologists and teachers. This dialogue is often problematic because of the differences in training, experience and culture between them. Conditions needed for fruitful dialogue are described and the generic model for learning design used in the project is explained.

12.
This paper deals with the question: What are the criteria that an adequate theory of computation has to meet? (1) Smith’s answer: it has to meet the empirical criterion (i.e. doing justice to computational practice), the conceptual criterion (i.e. explaining all the underlying concepts) and the cognitive criterion (i.e. providing solid grounds for computationalism). (2) Piccinini’s answer: it has to meet the objectivity criterion (i.e. identifying computation as a matter of fact), the explanation criterion (i.e. explaining the computer’s behaviour), the right things compute criterion, the miscomputation criterion (i.e. accounting for malfunctions), the taxonomy criterion (i.e. distinguishing between different classes of computers) and the empirical criterion. (3) Von Neumann’s answer: it has to meet the precision and reliability of computers criterion, the single error criterion (i.e. addressing the impacts of errors) and the distinction between analogue and digital computers criterion. (4) The “everything computes” answer: it has to meet the implementation theory criterion by properly explaining the notion of implementation.

13.
Query-focused multi-document summarization faces two difficulties. First, keeping the summary closely relevant to the query tends to make its content redundant and insufficiently comprehensive. Second, the original query rarely describes the full query intent, so query expansion is needed, yet most existing expansion methods rely on external semantic resources. To address these problems, this paper proposes a query-focused multi-document summarization method that uses topic analysis to identify the subtopics under the current topic and selects summary sentences by jointly considering two factors: the relevance of a sentence's subtopic to the query, and the importance of that subtopic. Query expansion is performed using the co-occurrence of words across subtopics, without any external knowledge. Experiments on the DUC 2006 evaluation corpus show that the system achieves higher ROUGE scores than the baseline, and the subtopic-based query expansion further improves summary quality.
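The sentence-selection idea described above — mixing a sentence's relevance to the query with the importance of its subtopic — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `alpha` weight, the bag-of-words cosine relevance, and the precomputed `subtopic_weight` input are all assumptions standing in for the paper's topic-analysis machinery.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_sentence(sentence: str, query: str,
                   subtopic_weight: float, alpha: float = 0.7) -> float:
    """Weighted mix of query relevance and subtopic importance (both in [0, 1])."""
    relevance = cosine(Counter(sentence.split()), Counter(query.split()))
    return alpha * relevance + (1 - alpha) * subtopic_weight

# Candidate sentences would be ranked by this score before extraction.
s = score_sentence("smart card security protocols", "card security", 0.8)
```

A real system would then pick the top-scoring sentences while penalizing redundancy against those already selected.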

14.
Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.

15.
This paper presents a new method of defuzzification of output parameters from the fuzzy rule base of a Mamdani fuzzy controller. The distinctive feature of the method is its use of a universal equation for computing the area of the geometric shapes involved: during fuzzy inference the linguistic terms change structure from triangular to trapezoidal, which is why the universal equation is needed. The method is limited to triangular and trapezoidal membership functions, although Gaussian functions can also be handled by modifying the proposed method. Traditional defuzzification models such as Middle of Maxima (MoM), First of Maxima (FoM), Last of Maxima (LoM), First of Support (FoS), Last of Support (LoS), Middle of Support (MoS), Center of Sums (CoS), and Model of Height (MoH) suffer from a number of systematic errors: the curse of dimensionality, violation of the partition-of-unity condition, and absence of additivity. These methods can all be seen as variants of Center of Gravity (CoG), which shares the same errors. The errors reduce the accuracy of fuzzy systems, because the root mean square error grows during training; one cause is that some activated fuzzy rules are excluded from the fuzzy inference. The accuracy of a fuzzy system can also be increased through the property of continuity. The proposed method guarantees continuity, as the intersection point of adjacent linguistic terms equals 0.5 when a parametrized membership function is used. The paper reviews the causes of these errors and a way to eliminate them; the proposed method avoids the errors inherent in both traditional and non-traditional defuzzification models, and a comparative analysis against those models demonstrates its effectiveness.
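For context on the family of defuzzification models the abstract compares, here is a minimal Center of Gravity (CoG) sketch for a trapezoidal membership function, computed numerically by sampling. The sampling grid and the particular trapezoid parameters are illustrative assumptions; the paper's own area-based universal equation is not reproduced here.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with feet at a, d and shoulders at b, c."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def cog_defuzzify(xs, mu):
    """Center of Gravity: centroid of the aggregated membership curve."""
    return np.sum(xs * mu) / np.sum(mu)

xs = np.linspace(0.0, 10.0, 1001)          # sampled universe of discourse
mu = trapezoid(xs, 2.0, 4.0, 6.0, 8.0)     # term symmetric about x = 5
crisp = cog_defuzzify(xs, mu)
```

By symmetry of this particular trapezoid, the crisp output lands at the midpoint of its support; the systematic errors discussed in the abstract arise in the multi-rule, aggregated case rather than in this single-term example.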

16.
This paper provides the author's personal views and perspectives on software process improvement. Starting with his first work on technology assessment at IBM over 20 years ago, Watts Humphrey describes the process improvement work he has been directly involved in. This includes the development of the early process assessment methods, the original design of the CMM, and the introduction of the Personal Software Process (PSP) and the Team Software Process (TSP). In addition to describing the original motivation for this work, the author reviews many of the problems he and his associates encountered and explains why they solved them the way they did. He also comments on the outstanding issues and likely directions for future work. Finally, this work has built on the experiences and contributions of many people. Mr. Humphrey describes only work that he was personally involved in and names many of the key contributors; however, so many people have been involved that a full list of the important participants would be impractical.

17.
Impact of cognitive theory on the practice of courseware authoring   (Cited: 1; self-citations: 0, other citations: 1)
The cognitive revolution has yielded unprecedented progress in our understanding of higher cognitive processes such as remembering and learning. It is natural to expect this scientific breakthrough to inform and guide the design of instruction in general and computer-based instruction in particular. In this paper I survey the different ways in which recent advances in cognitive theory might influence the design of computer-based instruction and spell out their implications for the design of authoring tools and tutoring system shells. The discussion is divided into four main sections. The first two sections deal with the design and the delivery of instruction. The third section analyzes the consequences for authoring systems. In the last section I propose a different way of thinking about this topic.

18.
Possibilistic distributions admit both measures of uncertainty and (metric) distances defining their information closeness. For general pairs of distributions these measures and metrics were first introduced in the form of integral expressions. Particularly important are pairs of distributions p and q which have consonant ordering: for any two events x and y in the domain of discourse, p(x) ≤ p(y) if and only if q(x) ≤ q(y). We call such distributions confluent and study their information distances.

This paper presents a discrete sum form of the uncertainty measures of arbitrary distributions, and uses it to obtain similar representations of metrics on the space of confluent distributions. Using these representations, a number of properties such as additivity, monotonicity and a form of distributivity are proven. Finally, a branching property is introduced, which will serve (in a separate paper) to characterize possibilistic information distances axiomatically.


19.
This paper is concerned with the problem of gain-scheduled H∞ filter design for a class of parameter-varying discrete-time systems. A new LMI-based design approach is proposed using parameter-dependent Lyapunov functions. Recommended by Editorial Board member Huanshui Zhang under the direction of Editor Jae Weon Choi. This work was supported in part by the National Natural Science Foundation of P. R. China under Grant 60874058, by 973 Program No. 2009CB320600, by the Natural Science Foundation of Zhejiang Province under Grant Y107056, and in part by a Research Grant from the Australian Research Council. Shaosheng Zhou received the B.S. degree in Applied Mathematics and the M.Sc. and Ph.D. degrees in Electrical Engineering in January 1992, July 1996 and October 2001, from Qufu Normal University and Southeast University. His research interests include nonlinear control and stochastic systems. Baoyong Zhang received the B.S. and M.Sc. degrees in Applied Mathematics in July 2003 and July 2006, both from Qufu Normal University. His research interests include nonlinear systems, robust control and filtering. Wei Xing Zheng received the B.Sc. degree in Applied Mathematics and the M.Sc. and Ph.D. degrees in Electrical Engineering in January 1982, July 1984 and February 1989, respectively, all from Southeast University, Nanjing, China. His research interests include signal processing and system identification.

20.
To address the opacity of the mining process and the limited user interaction in data mining, this paper designs and implements the VISDMiner system. VISDMiner combines visualization technology with data mining and supports analysis of the visualized intermediate result sets produced at each stage of the mining process. Users can adjust the parameters of the mining algorithm models and of the visualization models according to their own domain knowledge and experience, enabling effective tuning of the algorithms and of the mining analysis process. To handle high-dimensional data sets, VISDMiner adopts MIC-PCA, an improved principal component analysis algorithm based on the maximal information coefficient, which targets the limited dimensionality-reduction capability and low classification accuracy of the traditional PCA algorithm. Experimental results show that VISDMiner not only visualizes the data mining process but also improves the comprehensibility of mining results for users, and its improved MIC-PCA algorithm improves the dimensionality-reduction capability and classification accuracy of PCA.
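For reference, the classical PCA step that MIC-PCA builds on can be sketched as a plain SVD-based projection. This shows standard PCA only — the maximal-information-coefficient weighting that constitutes the paper's contribution is not reproduced here, and the data are synthetic.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing singular value (i.e. explained variance)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # synthetic 5-dimensional data
Z = pca_reduce(X, 2)                              # reduced to 2 dimensions
```

The first reduced component carries at least as much variance as the second, by the singular-value ordering of the SVD.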


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号