In this paper we describe a verification system for multi-agent programs. This is the first comprehensive approach to the verification of programs developed using programming languages based on the BDI (belief-desire-intention)
model of agency. In particular, we have developed a specific layer of abstraction, sitting between the underlying verification
system and the agent programming language, that maps the semantics of agent programs into the relevant model-checking framework.
Crucially, this abstraction layer is both flexible and extensible: not only can a variety of agent programming languages be implemented and verified, but even heterogeneous multi-agent programs can be captured semantically. In addition to describing this layer, and the semantic mapping inherent in it, we describe how the underlying model checker is driven and how agent properties are checked. We also present several
examples showing how the system can be used. As this is the first system of its kind, it is relatively slow, so we also indicate
further work needed to improve performance.
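The abstraction layer described above can be pictured as a uniform operational-semantics interface that each language interpreter implements, so the model checker can explore agent states without knowing the source language. The following is a minimal, hypothetical sketch of that idea; the interface, class names, and the toy "checker" are illustrative assumptions, not the paper's actual API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical uniform interface: any BDI-language interpreter exposes
// its belief base and a single-step transition of its operational semantics.
interface AgentSemantics {
    Set<String> beliefs();   // current belief base of the agent
    boolean step();          // perform one transition; false when no transition remains
}

// A toy agent whose semantics simply counts to three, adding a belief per step.
class CounterAgent implements AgentSemantics {
    private final Set<String> beliefs = new HashSet<>();
    private int n = 0;

    public Set<String> beliefs() { return beliefs; }

    public boolean step() {
        if (n >= 3) return false;
        n++;
        beliefs.add("count(" + n + ")");
        return true;
    }
}

public class Checker {
    // A grossly simplified stand-in for the model checker: run the semantics
    // to quiescence and test a property of the final belief base.
    static boolean holdsOnTermination(AgentSemantics agent, String required) {
        while (agent.step()) { /* explore the single linear trace */ }
        return agent.beliefs().contains(required);
    }

    public static void main(String[] args) {
        boolean ok = holdsOnTermination(new CounterAgent(), "count(3)");
        System.out.println(ok ? "property holds" : "property fails");
    }
}
```

Because the checker only sees `AgentSemantics`, a second language's interpreter (or a mixed population of agents) could be swapped in behind the same interface, which is the flexibility and heterogeneity the abstract claims.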