Similar Literature
20 similar records found (search time: 125 ms)
1.
In information management systems, information maintenance operations arise frequently, and tree directories, with their intuitive presentation, play an important role in such maintenance. To provide a convenient tree-directory maintenance facility for information management systems, this paper introduces the use of tree directories in Web information display, analyzes their characteristics, and proposes and implements a tree directory based on a Web component. The implementation uses a Java applet to achieve flexible customization of directory content and ease of use. Use in a real project verified that the design goals were met: the directory content can be changed simply by editing the corresponding configuration file.

2.
李冠德  陈梦东 《微机发展》2005,15(11):112-114
In information management systems, information maintenance operations arise frequently, and tree directories, with their intuitive presentation, play an important role in such maintenance. To provide a convenient tree-directory maintenance facility for information management systems, this paper introduces the use of tree directories in Web information display, analyzes their characteristics, and proposes and implements a tree directory based on a Web component. The implementation uses a Java applet to achieve flexible customization of directory content and ease of use. Use in a real project verified that the design goals were met: the directory content can be changed simply by editing the corresponding configuration file.
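The abstract above emphasizes that the tree directory's content is driven entirely by a configuration file, so editing the configuration changes the tree without touching code. A minimal sketch of that idea, using a hypothetical nested-dict configuration rather than the paper's actual format:

```python
# Hypothetical sketch of a configuration-driven tree directory: the
# structure comes entirely from `config`, so changing the displayed tree
# means editing the configuration, not the rendering code.

def render_tree(node, name="root", depth=0):
    """Render a nested-dict config as an indented tree, one line per entry."""
    lines = [("  " * depth) + name]
    for child_name, child in sorted(node.items()):
        lines.extend(render_tree(child, child_name, depth + 1))
    return lines

config = {
    "Departments": {"Finance": {}, "IT": {"Servers": {}}},
    "Projects": {},
}

tree = render_tree(config, "Company")
print("\n".join(tree))
```

Adding or removing a node in `config` immediately changes the rendered tree, which is the maintenance convenience the abstract describes.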

3.
This article describes the research and design of a complete government information resource directory system. The system comprises four subsystems: a cataloging system, a catalog submission system, a catalog management system, and a catalog service system. To suit regional characteristics, a two-tier directory architecture is adopted; the system is built on the J2EE architecture and developed with the Struts framework, using IBM MQ message middleware and SSL to ensure data security. The resulting system is full-featured, highly reliable, flexible, and broadly applicable.

4.
Building an LDAP-Based User Directory Service System for Air Traffic Management   Cited: 1 (self-citations: 0, other citations: 1)
To integrate the various business management, application, and administrative office systems of air traffic management (ATM), this paper studies the construction of a user directory service system as an effective solution to unified identity management. Building on an analysis of the characteristics and advantages of LDAP, the key technology behind directory services, it proposes a construction plan for an ATM user directory service, covering directory server deployment, directory structure planning, and attribute definition. The directory access and update mechanisms of the LDAP protocol are examined in depth and improved for this setting, and the correctness of the design is validated in an Active Directory simulation environment.

5.
Active Directory is a directory that provides service functionality; it is a database whose structure resembles the UNIX file system and offers good extensibility. Active Directory is typically accessed via LDAP (Lightweight Directory Access Protocol). Having ordinary developers access LDAP directories directly is not only difficult to program but also leads to much duplicated work and wasted resources. Drawing on the .NET techniques for accessing LDAP directories and the author's experience in real projects, and exploiting the tree-and-leaf structure of an LDAP directory, this paper encapsulates LDAP directory access along two dimensions: subdirectory nodes and user nodes. Accessing LDAP directories through this wrapper class lowers the development burden for ordinary developers while simplifying code and saving resources.
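The encapsulation idea above, hiding raw directory access behind subdirectory-node and user-node classes, can be sketched independently of any LDAP library. Here an in-memory dict stands in for the real LDAP store, and all class and attribute names are illustrative, not the paper's API:

```python
# Hypothetical sketch of wrapping directory access in node classes so that
# callers never touch protocol details. DIRECTORY is a stand-in for the
# real LDAP store; in practice these classes would issue LDAP queries.

DIRECTORY = {
    "ou=staff": {"users": {"uid=li": {"cn": "Li"}, "uid=chen": {"cn": "Chen"}}},
}

class SubdirectoryNode:
    """Wraps one subdirectory entry; exposes its user nodes."""
    def __init__(self, dn):
        self.dn = dn

    def users(self):
        return [UserNode(self.dn, uid)
                for uid in sorted(DIRECTORY[self.dn]["users"])]

class UserNode:
    """Wraps one user entry and its attributes."""
    def __init__(self, parent_dn, uid):
        self.uid = uid
        self.attrs = DIRECTORY[parent_dn]["users"][uid]

staff = SubdirectoryNode("ou=staff")
names = [u.attrs["cn"] for u in staff.users()]
```

Callers work with `SubdirectoryNode` and `UserNode` objects rather than query strings, which is the simplification the abstract claims for its wrapper class.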

6.
Research and Implementation of a Federated LDAP Directory Service System   Cited: 3 (self-citations: 0, other citations: 3)
杨燕  顾君忠 《计算机应用》2004,24(6):159-161
Addressing the problem that directory service systems based on the LDAP protocol currently cannot interconnect effectively, this paper proposes a "federated LDAP directory management" strategy, analyzes the requirements and characteristics of a federated LDAP directory system, and presents a framework for a federated multimedia information directory management system. The implementation scheme is described with formal methods, and the paper concludes by analyzing the system's search efficiency and hardware overhead.

7.
Section 1: Tree Directories. The most distinctive feature of the UNIX file system is that its directory (bookkeeping) structure forms a tree, as in Figure 1. The boxes from which lines extend downward are (narrowly defined) directories; the boxes at the leaves are files that actually hold programs and data. How such a directory structure is actually laid out can be inspected with the ls command, as shown in Table 1.
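The tree structure described above can be demonstrated without `ls` by building a small directory tree and walking it; this sketch uses Python's standard library only:

```python
# Build a tiny directory tree in a temp location and walk it, mimicking
# the structure that `ls -R` would reveal on a UNIX file system.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "usr", "bin"))
os.makedirs(os.path.join(root, "home"))
open(os.path.join(root, "usr", "bin", "prog"), "w").close()  # a leaf file

found = []
for dirpath, dirnames, filenames in os.walk(root):
    rel = os.path.relpath(dirpath, root)
    found.append((rel, sorted(dirnames), sorted(filenames)))

for rel, dirs, files in sorted(found):
    print(rel, "dirs:", dirs, "files:", files)
```

Interior entries (`usr`, `home`) are directories, while `prog` is a leaf holding actual data, matching the tree-versus-leaf distinction in the excerpt.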

8.
Windows 7 has certain special directory names, for example names ending in "." or names that use device names such as aux, com1, com2, prn, con, and nul. Because these names are reserved by the system, such directories cannot be created, accessed, or deleted from Explorer. You can exploit this to use a special directory as a strongbox: save important personal files inside it, and others cannot see (or delete) them, so they will not be lost.
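The trick above hinges on Windows reserving certain DOS device names regardless of case or extension. A small name check, with no filesystem access, illustrates the rule the tip relies on:

```python
# Check whether a name collides with a reserved DOS device name.
# Windows reserves these regardless of case, and even with an extension
# (e.g. "con.txt" is also unusable as an ordinary file name).

RESERVED = {"CON", "PRN", "AUX", "NUL"} \
           | {f"COM{i}" for i in range(1, 10)} \
           | {f"LPT{i}" for i in range(1, 10)}

def is_reserved_name(name):
    """True if `name` would collide with a reserved DOS device name."""
    stem = name.split(".")[0]
    return stem.upper() in RESERVED

checks = [is_reserved_name(n) for n in ("con", "Com1", "con.txt", "report")]
print(checks)
```

Explorer and most ordinary APIs refuse such names, which is exactly why a directory carrying one acts as the "strongbox" the tip describes.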

9.
Implementing a Spatial Metadata Catalog Service Based on the MVC Pattern   Cited: 2 (self-citations: 1, other citations: 2)
Based on an analysis of spatial metadata catalog services and their characteristics, this paper proposes a scheme for implementing such a service using the MVC pattern and XML, and gives the system's detailed design together with an application example.

10.
Based on an LDAP directory service system, this paper implements data synchronization between an LDAP tree directory and a relational database. It first explains, from the characteristics of LDAP directories and relational databases, why synchronization is necessary; it then focuses on the implementation, proposing a synchronization approach based on a web service interface and verifying its feasibility.
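The core of any such synchronization is computing which entries differ between the two stores. A hedged sketch of that diff step, where the entry shapes are hypothetical stand-ins for the paper's web-service interface:

```python
# Toy one-way synchronization: compare entries in the LDAP directory with
# rows in the relational database and derive the operations needed to bring
# the database in line. Entry shapes are illustrative, not the paper's schema.

def diff_entries(ldap_entries, db_rows):
    """Return (to_insert, to_update, to_delete) for syncing db to the directory."""
    to_insert = {k: v for k, v in ldap_entries.items() if k not in db_rows}
    to_update = {k: v for k, v in ldap_entries.items()
                 if k in db_rows and db_rows[k] != v}
    to_delete = [k for k in db_rows if k not in ldap_entries]
    return to_insert, to_update, to_delete

ldap_entries = {"u1": {"cn": "Li"}, "u2": {"cn": "Chen"}}
db_rows      = {"u2": {"cn": "Cheng"}, "u3": {"cn": "Wang"}}

ins, upd, dele = diff_entries(ldap_entries, db_rows)
```

In the paper's setting, the resulting operation sets would be shipped to the database through the web service interface it proposes.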

11.
This paper presents a simple grasp planning method for a multi-fingered hand. Its purpose is to compute a context-independent and dense set or list of grasps, instead of just a small set of grasps regarded as optimal with respect to a given criterion. By context-independent, we mean that only the robot hand and the object to grasp are considered; the environment and the position of the robot base with respect to the object are considered in a further stage. Such a dense set can be computed offline and then used to let the robot quickly choose a grasp adapted to a specific situation. This can be useful for manipulation planning of pick-and-place tasks. Another application is human-robot interaction when the human and robot have to hand over objects to each other. If a human and a robot have to work together with a predefined set of objects, grasp lists can be employed to allow fast interaction. The proposed method uses a dense sampling of the possible hand approaches based on a simple but efficient shape feature. As this leads to many finger inverse kinematics tests, hierarchical data structures are employed to reduce the computation times. The data structures allow a fast determination of the points where the fingers can realize a contact with the object surface. The grasps are ranked according to a grasp quality criterion, so the robot parses the list from best- to worst-quality grasps until it finds a grasp that is valid for the particular situation.

12.
The per-service cost has been a serious impediment to widespread use of on-line digital continuous media services, especially in the entertainment arena. Although handling continuous media may be achievable thanks to technology advances in the past few years, its competitiveness in the market with existing service types such as video rental is still in question. In this paper, we propose a model for continuous media service in a distributed infrastructure that has a video warehouse and intermediate storages connected via a high-speed communication network, in an effort to reduce the resources required to support a set of service requests. The storage and network resources needed to support a set of requests should be properly quantified into a uniform metric to measure the efficiency of the service schedule. We developed a cost model which maps a given service schedule to a quantity; it captures the amortized resource requirement of the schedule and thus measures its efficiency. The distributed environment consists of a massive-scale continuous media server called a video warehouse, and intermediate storages connected via a high-speed communication network. An intermediate storage is located in each neighborhood, and its main purpose is to avoid repeated delivery of the same file to a neighborhood. We consider a situation where a request for a video file is made some time in advance, and develop a scheduling algorithm which strategically replicates the requested continuous media files at the various intermediate storages.
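The amortized-cost idea above can be illustrated with a toy model: delivering a file from the warehouse to a neighborhood's intermediate storage incurs network cost once, and repeat requests in that neighborhood pay only local cost. All cost constants here are invented for illustration:

```python
# Toy cost model: the first request for a file in a neighborhood pays the
# warehouse-to-intermediate-storage network cost; every request also pays a
# small local serving cost. Constants are hypothetical units, not the paper's.

NETWORK_COST = 10   # warehouse -> intermediate storage transfer
STORAGE_COST = 1    # serving from the local intermediate storage

def schedule_cost(requests):
    """requests: list of (file, neighborhood) pairs; returns total cost."""
    cached = set()
    total = 0
    for file, hood in requests:
        if (file, hood) not in cached:
            total += NETWORK_COST      # first delivery to this neighborhood
            cached.add((file, hood))
        total += STORAGE_COST          # every request is served locally
    return total

reqs = [("movie", "A"), ("movie", "A"), ("movie", "B")]
cost = schedule_cost(reqs)
```

Two requests from neighborhood A share one network delivery, so the schedule's amortized cost is lower than delivering from the warehouse each time, which is the replication benefit the paper's scheduler exploits.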

13.
We consider a model of a 24-degree-of-freedom monkey robot that is supposed to perform brachiation locomotion, i.e. to swing from one rung of a horizontal ladder to the next using the arms. The robot hand is constructed as a planar hook, so the contact point about which the robot swings is a passive hinge. We identify the 10 most relevant degrees of freedom for this underactuated mechanical system and formulate a tractable search procedure consisting of the following steps: (a) introduce a parametrized family of coordination patterns to be enforced on the dynamics with respect to a path coordinate; (b) formulate geometric equality constraints that are necessary to achieve periodic locomotion; (c) generate trajectories from the integrable reduced dynamics associated with the passive hinge; (d) evaluate the energetic cost of transport. Moreover, we observe that a linear approximation of the reduced dynamics can be used for trajectory generation, which allows us to incorporate computation of an approximate gradient of the cost function into the search algorithm, significantly improving the computational efficiency.

14.
This paper presents a statistical approach to estimating the performance of a superscalar processor. Traditional trace-driven simulators can take a large amount of time to conduct a performance evaluation of a machine, especially as the number of instructions increases. The result of this type of simulation is also typically tied to the particular trace that was run: elements such as dependencies, delays, and stalls are all a direct result of the particular trace being run, and can differ from trace to trace. This paper describes a model designed to separate simulation results from a specific trace. Rather than running a trace-driven simulation, a statistical model is employed, more specifically a Poisson distribution, to predict how these types of delay affect performance. Through the use of this statistical model, a performance evaluation can be conducted using a general code model with specific stall rates, rather than a particular code trace. This model allows simulations to quickly run tens of millions of instructions and evaluate the performance of a particular micro-architecture, while at the same time allowing the flexibility to change the structure of the architecture.
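A minimal sketch of the statistical idea above: model stall events per instruction as a Poisson-distributed count and derive expected cycles per instruction (CPI) from stall rates rather than from a trace. The rates and penalties here are illustrative, not taken from the paper:

```python
# Statistical stand-in for a trace: stalls arrive at a given rate per
# instruction (Poisson-distributed count), each costing a fixed penalty.
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def expected_cpi(base_cpi, stall_rate, stall_penalty):
    """Expected CPI: base cost plus mean stall count times stall penalty."""
    return base_cpi + stall_rate * stall_penalty

p_no_stall = poisson_pmf(0, 0.2)        # chance an instruction stalls zero times
cpi = expected_cpi(1.0, 0.2, 5.0)       # hypothetical rates and penalty
```

Because the model is closed-form, evaluating a new stall rate or penalty is instantaneous, which is the speed advantage over replaying tens of millions of trace instructions.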

15.
Entropy-based multi-objective genetic algorithm for design optimization   Cited: 4 (self-citations: 0, other citations: 4)
Obtaining the fullest possible representation of solutions to a multiobjective optimization problem has been a major concern in Multi-Objective Genetic Algorithms (MOGAs). This is because a MOGA, by its very nature, can only produce a discrete representation of the Pareto solutions to a multiobjective optimization problem, and these solutions usually tend to group into clusters. This paper presents a new MOGA that aims at obtaining Pareto solutions with the maximum possible coverage and uniformity along the Pareto frontier. The new method, called an Entropy-based MOGA (or E-MOGA), is based on applying concepts from the statistical theory of gases to a baseline MOGA. Two demonstration examples, the design of a two-bar truss and a speed reducer, are used to demonstrate the effectiveness of E-MOGA in comparison to the baseline MOGA.
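The coverage-and-uniformity notion above can be made concrete with Shannon entropy: a population spread evenly across the Pareto frontier has higher entropy over a fixed binning than one bunched into clusters. The binning below is an illustrative stand-in for the paper's gas-theoretic formulation:

```python
# Entropy as a spread measure: bin solution positions on [0, 1] and compute
# the Shannon entropy (in nats) of the bin-occupancy distribution. A uniform
# spread maximizes entropy; a single cluster drives it to zero.
import math

def spread_entropy(positions, bins=4):
    """Shannon entropy of solution counts over equal-width bins on [0, 1]."""
    counts = [0] * bins
    for x in positions:
        counts[min(int(x * bins), bins - 1)] += 1
    n = len(positions)
    return -sum(c / n * math.log(c / n) for c in counts if c)

uniform   = spread_entropy([0.1, 0.35, 0.6, 0.85])   # one point per bin
clustered = spread_entropy([0.1, 0.12, 0.13, 0.15])  # all in the first bin
```

An entropy-based MOGA can use such a measure as a selection pressure, rewarding populations that fill the frontier rather than piling onto a few clusters.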

16.
Cluster analysis, or clustering, refers to the analysis of the structural organization of a data set. This analysis is performed by grouping together objects of the data that are more similar among themselves than to objects of different groups. The sampled data may be described by numerical features or by a symbolic representation, known as categorical features. These features often require a transformation into numerical data in order to be properly handled by clustering algorithms. The transformation usually assigns a weight to each feature calculated by a measure of importance (e.g., frequency, mutual information). A problem with this weight assignment is that the values are calculated with respect to the whole set of objects and features. This poses a problem when a subset of the features has a higher degree of importance for one subset of objects but a lower degree for another. One way to deal with this problem is to measure the importance of each subset of features only with respect to a subset of objects. This is known as co-clustering, which, similarly to clustering, is the task of finding a subset of objects and features that present a higher similarity among themselves than to other subsets of objects and features. As one might notice, this task has a higher complexity than traditional clustering and, if not properly dealt with, may present a scalability issue. In this paper we propose a novel co-clustering technique, called HBLCoClust, with the objective of extracting a set of co-clusters from a categorical data set, trading the guarantees of an enumerative algorithm for scalability. This is done by using a probabilistic clustering algorithm, named Locality Sensitive Hashing, together with the enumerative algorithm named InClose. The experimental results are competitive when applied to labeled categorical data sets and text corpora. Additionally, it is shown that the extracted co-clusters can be of practical use to expert systems such as recommender systems and topic extraction.

17.
The rectangle knapsack packing problem is to pack a number of rectangles into a larger stock sheet such that the total value of the packed rectangles is maximized. The paper first presents a fitness strategy, which is used to determine which rectangle should be packed first into a given position. Based on this fitness strategy, a constructive heuristic algorithm is developed to generate a solution, i.e. a given sequence of rectangles for packing. Then, a greedy strategy is used to search for a better solution. Finally, a simulated annealing algorithm is introduced to escape the local optima of the greedy strategy and find a further improved solution. Computational results on 221 rectangular packing instances show that the presented algorithm outperforms several previous algorithms on average.
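The greedy stage described above can be illustrated in a deliberately simplified form: score each rectangle by value per unit area and pack greedily under a total-area budget. Real 2D placement (positions, overlap checks, the fitness strategy) is far richer; this only sketches the greedy selection idea:

```python
# Simplified greedy selection for rectangle knapsack packing: rank rectangles
# by value density (value per unit area) and take them while the total area
# fits the sheet. This ignores actual 2D placement, which the paper handles
# with its fitness strategy and constructive heuristic.

def greedy_pack(rects, sheet_area):
    """rects: list of (w, h, value); returns (chosen indices, total value)."""
    order = sorted(range(len(rects)),
                   key=lambda i: rects[i][2] / (rects[i][0] * rects[i][1]),
                   reverse=True)
    used, total, chosen = 0, 0, []
    for i in order:
        w, h, v = rects[i]
        if used + w * h <= sheet_area:
            used += w * h
            total += v
            chosen.append(i)
    return chosen, total

rects = [(2, 2, 8), (3, 3, 9), (1, 1, 3)]   # (width, height, value)
chosen, total = greedy_pack(rects, sheet_area=10)
```

Greedy choices like this get trapped in local optima, which is precisely why the paper layers simulated annealing on top of the greedy strategy.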

18.
Mereotopology: A theory of parts and boundaries   Cited: 2 (self-citations: 0, other citations: 2)
The paper is a contribution to formal ontology. It seeks to use topological means in order to derive ontological laws pertaining to the boundaries and interiors of wholes, to relations of contact and connectedness, to the concepts of surface, point, neighbourhood, and so on. The basis of the theory is mereology, the formal theory of part and whole, a theory which is shown to have a number of advantages, for ontological purposes, over standard treatments of topology in set-theoretic terms. One central goal of the paper is to provide a rigorous formulation of Brentano's thesis to the effect that a boundary can exist as a matter of necessity only as part of a whole of higher dimension of which it is the boundary. It concludes with a brief survey of current applications of mereotopology in areas such as natural-language analysis, geographic information systems, machine vision, naive physics, and database and knowledge engineering.

19.
A subsequence is obtained from a string by deleting any number of characters; thus in contrast to a substring, a subsequence is not necessarily a contiguous part of the string. Counting subsequences under various constraints has become relevant to biological sequence analysis, to machine learning, to coding theory, to the analysis of categorical time series in the social sciences, and to the theory of word complexity. We present theorems that lead to efficient dynamic programming algorithms to count (1) distinct subsequences in a string, (2) distinct common subsequences of two strings, (3) matching joint embeddings in two strings, (4) distinct subsequences with a given minimum span, and (5) sequences generated by a string allowing characters to come in runs of a length that is bounded from above.
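Item (1) above, counting distinct subsequences of a string, admits a classic linear-time dynamic program: extending all counted subsequences by each new character and subtracting the ones already counted at that character's previous occurrence. A minimal sketch (the exact recurrences in the paper may differ):

```python
# Count distinct non-empty subsequences with the last-occurrence DP:
# each character doubles the count (append it to everything so far),
# minus the subsequences already counted at its previous occurrence.

def count_distinct_subsequences(s):
    dp = 1          # number of distinct subsequences so far, incl. the empty one
    last = {}       # char -> dp value just before its previous occurrence
    for ch in s:
        prev = dp
        dp = 2 * dp - last.get(ch, 0)
        last[ch] = prev
    return dp - 1   # drop the empty subsequence

n = count_distinct_subsequences("aba")   # a, b, aa, ab, ba, aba
```

Each character is processed once with O(1) work (plus the alphabet-sized map), giving O(n) time, which is the kind of efficiency the abstract's theorems aim for.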

20.
We define a virtual environment (VE) as a set of surroundings that appear to a user through computer-generated sensory stimuli. The level of immersion, or sense of being in another world, that a user experiences within a VE relates to how much stimulus the computer delivers to the user. Thus, one can classify VEs along a virtuality continuum, which ranges from the real world to an entirely computer-generated environment. We present a technology that allows seamless transitions between levels of immersion in VEs. Milgram and Kishino (1994) first proposed the concept of a virtuality continuum in the context of visual displays. The concept extends to multimodal VEs, which combine multiple sensory stimuli, including 3D sound and haptic capability, leading to a multidimensional virtuality continuum. Emerging applications will benefit from multiple levels of immersion, requiring innovative multimodal technologies and the ability to traverse the multidimensional virtuality continuum.
