The security of a deterministic quantum communication scheme, namely LM05 [1], is studied in the presence of a lossy channel under the assumption of imperfect generation and detection of single photons. It is shown that the scheme allows for a rate of distillable secure bits higher than that of BB84 [2]. We report on a first implementation of LM05 with weak pulses.
The publish/subscribe model offers a loosely coupled communication paradigm in which applications interact indirectly and asynchronously. Publishers generate events that are sent to interested applications through a network of brokers. Subscribers express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of the messages being exchanged is still challenging. First of all, any scheme used to protect the confidentiality of both events and filters should not require publishers and subscribers to share secret keys; such a restriction would work against the loose coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters, and it should still allow brokers to perform event filtering in order to route events to the interested parties. Existing solutions do not fully address these issues. In this paper, we provide a novel scheme that (i) supports confidentiality for both events and filters; (ii) allows publishers to express further constraints on who can access their events; (iii) supports filters that can express very complex constraints on events, even though brokers cannot access any information in the clear on either events or filters; and (iv) does not require publishers and subscribers to share keys. Furthermore, we show how we applied our scheme to a real-world e-health scenario developed together with a hospital, and we describe the implementation of our solution in Java and its integration with an existing publish/subscribe system.
Digital cameras, new-generation phones, commercial TV sets and, in general, all modern devices for image acquisition and visualization can benefit from image-enhancement algorithms that work in real time, preferably with limited power consumption. Among the various methods described in the scientific literature, Retinex-based approaches provide very good performance, but they typically require a high computational effort. In this article, we propose a flexible and effective architecture for the real-time enhancement of video frames, suitable for implementation in a single FPGA device. The video enhancement algorithm is based on a modified version of the Retinex approach. This method, developed to control the dynamic range of poorly illuminated images while preserving visual details, has been improved by the adoption of a new model for illuminance estimation. The video enhancement parameters are controlled in real time through an embedded microprocessor, which allows the system to adapt its behavior to the characteristics of the input images and to information about the surrounding light conditions.
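The Retinex principle that the abstract builds on can be illustrated with a minimal single-scale sketch: estimate the illuminance by Gaussian smoothing and remove it in the log domain. This is only an illustration of the general approach, not the authors' modified illuminance model or their FPGA architecture, and all parameter values are invented.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def estimate_illuminance(image, sigma=15.0):
    """Separable Gaussian blur as a crude illuminance estimate."""
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def single_scale_retinex(image, sigma=15.0, eps=1e-6):
    """R = log(I) - log(illuminance): compresses dynamic range, keeps local detail."""
    illum = estimate_illuminance(image, sigma)
    return np.log(image + eps) - np.log(illum + eps)

# Sanity check: a flat image carries no detail, so its Retinex response
# is (near) zero away from the image borders.
flat = np.full((128, 128), 0.5)
out = single_scale_retinex(flat)
```

Real-time FPGA implementations typically replace the large Gaussian window with cheaper recursive or box-filter approximations; the separable convolution above is just the simplest correct form of the estimator.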
Managing the resources in a large Web serving system requires knowledge of the resource needs of service requests of various types. In order to investigate the properties of Web traffic and its demand, we collected measurements of throughput and CPU utilization and performed several data analyses. First, we present our findings on the time-varying nature of the traffic, the skewness of traffic intensity among the various types of requests, the correlation among traffic streams, and other system-related phenomena. Then, given this nature of Web traffic, we devise and implement an on-line method for the dynamic estimation of CPU demand.
Assessing resource needs is commonly performed using techniques such as off-line profiling, application instrumentation, and kernel-based instrumentation. Little attention has been given to estimating time-varying resource needs on-line, relying only on external, high-level measurements such as overall resource utilization and request rates. We consider the problem of dynamically estimating the CPU demands of multiple kinds of requests using CPU utilization and throughput measurements. We formulate this as a multivariate linear regression problem and obtain its basic solution. However, as our measurement data analysis indicates, one is faced with issues such as insignificant flows, collinear flows, spatial and temporal variations, and background noise. To deal with these issues, we present several mechanisms, including data aging, flow rejection, flow combining, noise reduction, and smoothing. We implemented these techniques in a Work Profiler component that we delivered as part of a broader system management product, and we present experimental results from using this component in scenarios inspired by real-world usage of that product.
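The basic regression formulation can be sketched as follows: per-interval CPU utilization is modeled as the sum of each flow's request rate times its (unknown) per-request CPU demand, plus a background term. The data here are synthetic and the sketch shows only the plain least-squares solution, not the data-aging, flow-rejection, or smoothing mechanisms the paper adds on top.

```python
import numpy as np

# Synthetic example: per-interval request rates for 3 flow types, with
# CPU utilization generated from known per-request demands plus noise.
rng = np.random.default_rng(0)
n_intervals, n_flows = 200, 3
rates = rng.uniform(10, 100, size=(n_intervals, n_flows))  # requests/sec per flow
true_demand = np.array([0.002, 0.005, 0.001])              # CPU-sec per request
background = 0.05                                          # idle/system load
noise = rng.normal(0, 0.01, n_intervals)                   # measurement noise
utilization = rates @ true_demand + background + noise

# Multivariate linear regression: utilization ≈ rates·d + b.
# A constant column lets the intercept absorb the background load.
X = np.hstack([rates, np.ones((n_intervals, 1))])
coef, *_ = np.linalg.lstsq(X, utilization, rcond=None)
est_demand, est_background = coef[:-1], coef[-1]
```

In practice the issues the abstract lists break this clean picture: collinear flows make `X` ill-conditioned, and non-stationarity motivates weighting recent intervals more heavily (data aging) rather than solving over the full history.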
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and which can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize the apparent contour of E from the topological viewpoint; namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer. These two topological theorems allow us to determine the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
Traditionally, direct marketing companies have relied on pre-testing to select the best offers to send to their audience.
Companies systematically dispatch the offers under consideration to a limited sample of potential buyers, rank them with respect
to their performance and, based on this ranking, decide which offers to send to the wider population. Though this pre-testing
process is simple and widely used, recently the industry has been under increased pressure to further optimize learning, in
particular when facing severe time and learning space constraints. The main contribution of the present work is to demonstrate that direct marketing firms can exploit information on visual content to optimize the learning phase. This paper proposes a two-phase learning strategy based on a cascade of regression methods that takes advantage of visual and text features to improve and accelerate the learning process. Experiments in the domain of a commercial Multimedia Messaging Service (MMS) show the effectiveness of the proposed methods and a significant improvement over traditional learning techniques. The proposed approach can be used in any multimedia direct marketing domain in which offers comprise both a visual and a text component.
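The two-phase idea can be illustrated with a small sketch, with invented feature names and numbers (the paper's actual cascade and features are not reproduced here): phase one uses a regression on visual/text features of past offers to pre-rank unseen ones, and phase two pre-tests only the shortlisted offers on a small audience sample before the final roll-out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each offer is described by visual features (e.g.
# colorfulness, brightness) and text features (e.g. length, keyword
# counts); all values here are synthetic.
n_offers, n_features = 50, 6
features = rng.normal(size=(n_offers, n_features))
true_weights = rng.normal(size=n_features)
response_rate = features @ true_weights + rng.normal(0, 0.1, n_offers)

# Phase 1: ridge regression fitted on historical offers predicts the
# performance of new offers, so only promising candidates enter the
# (costly) live pre-test.
train_X, train_y = features[:40], response_rate[:40]
lam = 0.1
w = np.linalg.solve(train_X.T @ train_X + lam * np.eye(n_features),
                    train_X.T @ train_y)
new_X = features[40:]
predicted = new_X @ w

# Phase 2: pre-test only the top-k predicted offers on a small sample,
# then rank them by observed (noisy) response for the final roll-out.
k = 5
shortlist = np.argsort(predicted)[::-1][:k]
observed = response_rate[40:][shortlist] + rng.normal(0, 0.05, k)
final_order = shortlist[np.argsort(observed)[::-1]]
```

The design point is the economics: the regression in phase one is nearly free, so spending the limited testing budget only on its shortlist accelerates learning compared with pre-testing every candidate offer.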
Sebastiano Battiato
was born in Catania, Italy, in 1972. He received his degree in Computer Science (summa cum laude) in 1995 and his Ph.D. in Computer Science and Applied Mathematics in 1999. From 1999 to 2003 he led the “Imaging” team at STMicroelectronics in Catania. Since 2004 he has worked as a Researcher at the Department of Mathematics and Computer Science of the University of Catania. His research interests include image enhancement and processing, image coding and camera imaging technology. He has published more than 90 papers in international journals, conference proceedings and book chapters, and he is co-inventor of about 15 international patents. He is a reviewer for several international journals and has regularly served on international conference committees. He has participated in many international and national research projects. He is an Associate Editor of the SPIE Journal of Electronic Imaging (specialty: digital photography and image compression) and director of ICVSS (International Computer Vision Summer School). He is a Senior Member of the IEEE.
Giovanni Maria Farinella
is currently a contract researcher at the Dipartimento di Matematica e Informatica, University of Catania, Italy (IPLAB research group). He has also been an associate member of the Computer Vision and Robotics Research Group at the University of Cambridge since 2006. His research interests lie in the fields of computer vision, pattern recognition and machine learning. In 2004 he received his degree in Computer Science (egregia cum laude) from the University of Catania, which also awarded him a Ph.D. in Computer Vision in 2008. He has co-authored several papers in international journals and conference proceedings, and he serves as a reviewer for numerous international journals and conferences. He is currently co-director of the International Computer Vision Summer School (ICVSS).
Giovanni Giuffrida
is an assistant professor at the University of Catania, Italy. He received a degree in Computer Science from the University of Pisa, Italy, in 1988 (summa cum laude), a Master of Science in Computer Science from the University of Houston, Texas, in 1992, and a Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2001. He has extensive experience in both industry and academia, having served as CTO and CEO in industry and as a consultant for various organizations. His research interest is in optimizing content delivery on new media such as the Internet, mobile phones, and digital TV. He has published several papers on data mining and its applications. He is a member of the ACM and the IEEE.
Catarina Sismeiro
is a senior lecturer at Imperial College Business School, Imperial College London. She received her Ph.D. in Marketing from the University of California, Los Angeles, and her Licenciatura in Management from the University of Porto, Portugal. Before joining Imperial College, Catarina was an assistant professor at the Marshall School of Business, University of Southern California. Her primary research interests include pharmaceutical markets, modeling consumer behavior in interactive environments, and modeling spatial dependencies. Other areas of interest are decision theory, econometric methods, and the use of image and text features to predict the effectiveness of marketing communication tools. Catarina’s work has appeared in numerous marketing and management science conferences. Her research has also been published in the Journal of Marketing Research, Management Science, Marketing Letters, Journal of Interactive Marketing, and International Journal of Research in Marketing. She received the 2003 Paul Green Award and was a finalist for the 2007 and 2008 O’Dell Awards. Catarina was also a 2007 Marketing Science Institute Young Scholar, and she received the D. Antonia Adelaide Ferreira award and the ADMES/MARKTEST award for scientific excellence. She is currently on the editorial boards of Marketing Science and the International Journal of Research in Marketing.
Giuseppe Tribulato
was born in Messina, Italy, in 1979. He received his degree in Computer Science (summa cum laude) in 2004 and his Ph.D. in Computer Science in 2008. Since 2005 he has led the research team at Neodata Group. His research interests include data mining techniques, recommendation systems and customer targeting.
We define a family of Distributed Hash Table systems whose aim is to combine the routing efficiency of randomized networks—e.g., optimal average path length O(log² n/(δ log δ)) with degree δ—with the programmability and startup efficiency of a uniform overlay, that is, a deterministic system in which the overlay network is transitive and greedy routing is optimal. It is known that Ω(log n) is a lower bound on the average path length for uniform overlays with O(log n) degree (Xu et al., IEEE J. Sel. Areas Commun. 22(1), 151–163, 2004).
Our work is inspired by neighbor-of-neighbor (NoN) routing, a recently introduced variation of greedy routing that achieves optimal average path length in randomized networks. The advantage of our proposal is that it allows the NoN technique to be implemented without adding any overhead to the corresponding deterministic network.
We propose a family of networks parameterized by a positive integer c which measures the amount of randomness used. By varying the value of c, the system goes from the deterministic case (c=1) to an “almost uniform” system. Increasing c to relatively low values allows routing with asymptotically optimal average path length while retaining most of the advantages of a uniform system, such as easy programmability and quick bootstrap of the nodes entering the system.
We also provide a matching lower bound on the average path length of the routing schemes for any c.
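The difference between plain greedy routing and NoN routing can be illustrated on a toy randomized ring overlay (illustrative only; this is not the paper's parameterized family, and the overlay construction here is a plain uniform-random one): greedy moves to the neighbor closest to the target, while NoN also inspects neighbors-of-neighbors and takes the two-hop step when it lands closer.

```python
import random

def build_overlay(n, degree, seed=0):
    """Each node keeps its ring successor plus random long-range links."""
    rng = random.Random(seed)
    links = {}
    for u in range(n):
        nbrs = {(u + 1) % n}                 # successor guarantees progress
        while len(nbrs) < degree:
            nbrs.add(rng.randrange(n))
        nbrs.discard(u)
        links[u] = sorted(nbrs)
    return links

def dist(a, b, n):
    return (b - a) % n                       # clockwise ring distance

def greedy_route(links, src, dst, n):
    """Move to the neighbor closest to dst; terminates via the successor link."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda v: dist(v, dst, n))
        hops += 1
    return hops

def non_route(links, src, dst, n):
    """NoN greedy: look two hops ahead, take the two-hop step if it wins."""
    hops, cur = 0, src
    while cur != dst:
        best_v, best_w, best_d = None, None, dist(cur, dst, n)
        for v in links[cur]:
            if dist(v, dst, n) < best_d:
                best_v, best_w, best_d = v, None, dist(v, dst, n)
            for w in links[v]:               # neighbor-of-neighbor candidates
                if dist(w, dst, n) < best_d:
                    best_v, best_w, best_d = v, w, dist(w, dst, n)
        cur = best_v
        hops += 1
        if best_w is not None and cur != dst:
            cur = best_w                     # second hop of the two-hop step
            hops += 1
    return hops

n, degree = 256, 8
links = build_overlay(n, degree, seed=42)
greedy_hops = [greedy_route(links, 0, t, n) for t in range(1, n)]
non_hops = [non_route(links, 0, t, n) for t in range(1, n)]
```

Note that NoN needs no extra links, only knowledge of the neighbors' neighbor lists, which is why the paper can graft it onto a deterministic overlay without overhead; the trade-off is the larger candidate set examined per step.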
This work was partially supported by the Italian FIRB project “WEB-MINDS” (Wide-scalE, Broadband MIddleware for Network Distributed Services).
Journal of Intelligent Information Systems - Conversational Recommender Systems have received widespread attention in both research and practice. They assist people in finding relevant and...