The visual shape parameters and aesthetic aspects of a product are among the crucial factors for its success in the market. The type and value of the shape parameters play an important role in the visual appearance of a product, and designers tend to be critical when deciding these parameters. The aesthetic aspect of a product has been a matter of concern for researchers alongside its electromechanical design. The Kano model has been found to be a useful tool for establishing the relationship between performance criteria and customer satisfaction; to achieve the desired customer satisfaction, the weight of each product criterion is determined using the Kano model. This study presents an integrative design approach combining the Kano model, the Taguchi method and grey relational analysis to obtain the optimal combination of shape parameters and aesthetic aspects. Prioritized criteria of aesthetic attributes are extracted through the proposed methodology. A case study is presented that evolves the profile of a car.
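To make the grey relational step concrete, the sketch below computes grey relational grades for a small design-of-experiments matrix. The response values, the Kano-derived weights and the larger-is-better normalization are hypothetical illustrations, not values from the study.

```python
# Illustrative sketch of grey relational analysis (GRA), assuming
# larger-is-better responses and Kano-derived criterion weights.
# All inputs below are hypothetical, not taken from the paper.
import numpy as np

def grey_relational_grades(responses, weights, zeta=0.5):
    """responses: (n_trials, n_criteria) matrix of measured responses.
    weights: per-criterion weights (e.g., derived from a Kano survey).
    zeta: distinguishing coefficient, conventionally 0.5."""
    x = np.asarray(responses, dtype=float)
    # Normalize each criterion to [0, 1] (larger-is-better convention).
    norm = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
    # Deviation from the ideal (all-ones) reference sequence.
    delta = np.abs(1.0 - norm)
    # Grey relational coefficient for every trial/criterion pair.
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # Weighted grade per trial; the best parameter combination maximizes it.
    return coeff @ (np.asarray(weights) / np.sum(weights))

grades = grey_relational_grades([[0.8, 60.0], [0.6, 75.0], [0.9, 70.0]],
                                weights=[0.7, 0.3])
print(grades.argmax())  # index of the best trial in this toy example
```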
We propose tackling a “mini challenge” problem: a nontrivial verification effort that can be completed in 2–3 years, and will help establish notational standards, common formats, and libraries of benchmarks that will be essential in order for the verification community to collaborate on meeting Hoare’s 15-year verification grand challenge. We believe that a suitable candidate for such a mini challenge is the development of a filesystem that is verifiably reliable and secure. The paper argues why we believe a filesystem is the right candidate for a mini challenge and describes a project in which we are building a small embedded filesystem for use with flash memory.
The work described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Microsystem Technologies - Due to fast technological development, human beings generally depend upon computers and other digital equipment in different areas of concern/applications. Therefore,...
This article describes RISCBOT (the name is derived from “RISC lab” and the “bot” of “robot”), a modular 802.11b-enabled mobile autonomous robot built at the RISC lab of the University of Bridgeport. RISCBOT localizes itself, successfully fulfills web-enabled online user requests, and navigates to various rooms, employing a visual recognition algorithm. This article describes the mechanical design, hardware and software algorithms of the robot, and the web-based interface for communicating with the robot.
⋆RISC lab: Interdisciplinary Robotics, Intelligent Sensing and Control laboratory at the University of Bridgeport.
With the advancement of image acquisition devices and social networking services, a huge volume of image data is generated. Using different image and video processing applications, these image data are manipulated, and thus original images get tampered with. Such tampered images are a prime source of spreading fake news, defaming personalities and, in some cases (when used as evidence), misleading law-enforcement bodies. Hence, before relying entirely on image data, the authenticity of the image must be verified. Works in the literature have reported verifying the authenticity of an image based on noise inconsistency. However, these works suffer from limitations: confusion between edges and noise, the need for post-processing operations for localization, and the need for prior knowledge about an image. To handle these limitations, a noise-inconsistency-based technique is presented here to detect and localize a forged region in an image. The work consists of three major steps: pre-processing, noise estimation and post-processing. For the experiments, two publicly available datasets are used. The results are discussed in terms of precision, recall, accuracy and F1-score at the pixel level, and are compared with recent state-of-the-art techniques. The average accuracy of the proposed work on these datasets is 91.70%, the highest among the compared techniques.
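As an illustration of the noise-estimation step, the following sketch computes a block-wise noise map and flags blocks whose noise level is inconsistent with the rest of the image. It substitutes Immerkaer's Laplacian-based estimator for the paper's estimator, and the block size and MAD threshold are assumed values.

```python
# Minimal sketch of block-wise noise-inconsistency analysis, assuming a
# grayscale image as a 2-D NumPy array. It uses Immerkaer's Laplacian-based
# noise estimator rather than the paper's exact estimator; block size and
# threshold are hypothetical.
import numpy as np
from scipy.signal import convolve2d

LAPLACIAN = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)

def block_noise_map(image, block=64):
    """Estimate a noise standard deviation for each non-overlapping block."""
    h, w = image.shape
    rows, cols = h // block, w // block
    sigma = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r*block:(r+1)*block, c*block:(c+1)*block]
            resp = convolve2d(patch, LAPLACIAN, mode='valid')
            n = resp.size
            sigma[r, c] = np.sqrt(np.pi / 2) * np.abs(resp).sum() / (6.0 * n)
    return sigma

def flag_inconsistent(sigma_map, k=2.0):
    """Flag blocks whose noise level deviates strongly from the median."""
    med = np.median(sigma_map)
    mad = np.median(np.abs(sigma_map - med)) + 1e-9
    return np.abs(sigma_map - med) > k * mad  # True = suspected tampering
```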
Approaches for content-based image querying typically extract a single signature from each image based on color, texture, or shape features. The images returned as the query result are then the ones whose signatures are closest to the signature of the query image. While efficient for simple images, such methods do not work well for complex scenes since they fail to retrieve images that match the query only partially, that is, only certain regions of the image match. This inefficiency leads to the discarding of images that may be semantically very similar to the query image since they may contain the same objects. The problem becomes even more apparent when we consider scaled or translated versions of the similar objects. We propose WALRUS (wavelet-based retrieval of user-specified scenes), a novel similarity retrieval algorithm that is robust to scaling and translation of objects within an image. WALRUS employs a novel similarity model in which each image is first decomposed into its regions and the similarity measure between a pair of images is then defined to be the fraction of the area of the two images covered by matching regions from the images. In order to extract regions for an image, WALRUS considers sliding windows of varying sizes and then clusters them based on the proximity of their signatures. An efficient dynamic programming algorithm is used to compute wavelet-based signatures for the sliding windows. Experimental results on real-life data sets corroborate the effectiveness of WALRUS's similarity model.
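A minimal sketch of the WALRUS-style similarity model follows: given two images already decomposed into regions with wavelet signatures, it reports the fraction of the combined area covered by matched regions. The greedy pairing and the epsilon threshold are simplifications of the paper's region-matching step.

```python
# A sketch of WALRUS-style region matching, assuming each image has already
# been decomposed into regions with a wavelet signature and a pixel area.
# The greedy pairing and the eps threshold here are simplifications.
import numpy as np

def region_similarity(regions_a, regions_b, eps=0.15):
    """regions_*: list of (signature_vector, area) tuples.
    Returns the fraction of the combined area covered by matched regions."""
    matched_a = matched_b = 0.0
    used = set()
    for sig_a, area_a in regions_a:
        best, best_dist = None, eps
        for j, (sig_b, area_b) in enumerate(regions_b):
            if j in used:
                continue
            d = np.linalg.norm(np.asarray(sig_a) - np.asarray(sig_b))
            if d < best_dist:
                best, best_dist = j, d
        if best is not None:
            used.add(best)
            matched_a += area_a
            matched_b += regions_b[best][1]
    total = sum(a for _, a in regions_a) + sum(a for _, a in regions_b)
    return (matched_a + matched_b) / total if total else 0.0
```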
There is growing interest in algorithms for processing and querying continuous data streams (i.e., data seen only once in a fixed order) with limited memory resources. In its most general form, a data stream is actually an update stream, i.e., comprising data-item deletions as well as insertions. Such massive update streams arise naturally in several application domains (e.g., monitoring of large IP network installations or processing of retail-chain transactions). Estimating the cardinality of set expressions defined over several (possibly distributed) update streams is perhaps one of the most fundamental query classes of interest; as an example, such a query may ask: “What is the number of distinct IP source addresses seen in passing packets from both routers R1 and R2 but not router R3?” Earlier work only addressed very restricted forms of this problem, focusing solely on the special case of insert-only streams and specific operators (e.g., union). In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel, hash-based synopsis data structure, termed the 2-level hash sketch. We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using only space that is significantly sublinear in the sizes of the streaming input (multi-)sets. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Finally, we propose an optimized, time-efficient stream synopsis (based on 2-level hash sketches) that provides similar, strong accuracy-space guarantees while requiring only guaranteed logarithmic maintenance time per update, thus making our methods applicable for truly rapid-rate data streams. Our results from an empirical study of our synopsis and estimation techniques verify the effectiveness of our approach.
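The following sketch illustrates the singleton-detection idea behind a 2-level hash sketch: a first-level hash assigns each item to a level, and per-bit counters at the second level reveal whether a level currently holds exactly one distinct item, even after deletions. The bit width, hashing, and class interface are assumptions for illustration, not the paper's exact construction.

```python
# A simplified sketch of the 2-level hash sketch idea for update streams,
# assuming 32-bit item hashes; parameters and hashing are illustrative,
# not the paper's exact construction.
import random

BITS = 32

class TwoLevelHashSketch:
    def __init__(self, seed=0):
        rnd = random.Random(seed)
        self._salt = rnd.getrandbits(64)
        # One bucket per first-level "level"; each bucket keeps a total
        # update count plus one counter per bit position of the item hash.
        self.total = [0] * BITS
        self.bit = [[0] * BITS for _ in range(BITS)]

    def _hash(self, item):
        return hash((self._salt, item)) & (2**BITS - 1)

    def _level(self, h):
        # Level = number of trailing zeros (geometric distribution).
        return (h & -h).bit_length() - 1 if h else BITS - 1

    def update(self, item, delta):
        """delta = +1 for an insertion, -1 for a deletion."""
        h = self._hash(item)
        lvl = self._level(h)
        self.total[lvl] += delta
        for b in range(BITS):
            if (h >> b) & 1:
                self.bit[lvl][b] += delta

    def singleton_levels(self):
        """Levels that currently hold exactly one distinct item: every bit
        counter must be 0 or equal to the level's total count."""
        out = []
        for lvl in range(BITS):
            t = self.total[lvl]
            if t > 0 and all(c in (0, t) for c in self.bit[lvl]):
                out.append(lvl)
        return out
```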
Despite years of study on failure prediction, it remains an open problem, especially in large-scale systems composed of a vast number of components. In this paper, we present a dynamic meta-learning framework for failure prediction that is intended not only to provide reasonable prediction accuracy but also to be of practical use in realistic environments. Two key techniques are developed to address the technical challenges of failure prediction. One is meta-learning, which boosts prediction accuracy by combining the benefits of multiple predictive techniques. The other is a dynamic approach that obtains failure patterns from a changing training set and extracts effective rules by actively monitoring prediction accuracy at runtime. We demonstrate the effectiveness and practical use of this framework using real system logs collected from the production Blue Gene/L systems at Argonne National Laboratory and the San Diego Supercomputer Center. Our case studies indicate that the proposed mechanism can provide reasonable prediction accuracy by forecasting up to 82% of the failures, with a runtime overhead of less than 1.0 minute.
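A minimal sketch of the dynamic meta-learning idea is given below: several base failure predictors vote, their weights are revised as runtime accuracy is observed, and a sliding window retains recent events for retraining. The base-predictor interface, window size and decay factor are hypothetical stand-ins for the paper's rule-based and statistical methods.

```python
# A minimal sketch of the dynamic meta-learning idea: several base failure
# predictors are combined by weights that are revised as their runtime
# accuracy is observed. Base-learner details and the window size are
# hypothetical stand-ins for the paper's rule- and statistics-based methods.
from collections import deque

class DynamicMetaPredictor:
    def __init__(self, base_predictors, window=1000, decay=0.9):
        self.base = base_predictors          # callables: event -> bool
        self.weights = [1.0] * len(base_predictors)
        self.window = deque(maxlen=window)   # sliding training window
        self.decay = decay

    def predict(self, event):
        """Weighted vote: predict failure if supporting weight dominates."""
        votes = [p(event) for p in self.base]
        yes = sum(w for w, v in zip(self.weights, votes) if v)
        return yes >= 0.5 * sum(self.weights), votes

    def observe(self, event, failed):
        """Update weights once the true outcome of an event is known."""
        _, votes = self.predict(event)
        for i, v in enumerate(votes):
            # Multiplicatively penalize predictors that were wrong.
            if v != failed:
                self.weights[i] *= self.decay
        self.window.append((event, failed))  # retained for periodic retraining
```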