Similar Literature
20 similar documents found (search time: 343 ms)
1.
This article presents a model of a device for reading (visualizing) hidden magnetic information from holograms combined with a magneto-optical layer. Methods are proposed for forming magnetic images on protected documents and for reading them magneto-optically. Using the magneto-optical meridional Kerr effect, the reading head makes a "blinking effect" visually observable on a hologram with a hidden magnetic layer. A mathematical analysis of the magneto-optical Kerr and Faraday effects was carried out. The hidden magnetic image is based on:
–  a hard magnetic layer based on Tb-Fe with perpendicular anisotropy
–  soft magnetic layers based on permalloy.
Advantages of the device:
–  non-contact reading of the magnetic information
–  the magnetic-image formation technology is difficult to reproduce.
The optical scheme of the device contains a light source, a polarizer, an analyzer, the hologram with magneto-optical layers, and a permanent magnet. The hologram is placed between the polarizer and the analyzer. The text was submitted by the authors in English.
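As an illustrative aside (not taken from the article): the contrast such a reading head detects can be sketched with Malus's law, treating the Kerr effect as a small rotation of the polarization plane whose sign follows the layer's magnetization. The angles and intensities below are assumed values chosen only for the example.

```python
import math

def transmitted_intensity(i0, analyzer_angle_deg, kerr_rotation_deg):
    """Malus's law for a polarizer/analyzer pair: the Kerr effect rotates
    the polarization by +/- theta_k depending on the magnetization
    direction, shifting the effective analyzer angle and hence the
    detected intensity."""
    theta = math.radians(analyzer_angle_deg - kerr_rotation_deg)
    return i0 * math.cos(theta) ** 2

# Near-crossed polarizers (assumed 85 deg): a small Kerr rotation of
# +/-0.5 deg yields a measurable intensity contrast between domains,
# which is what makes the hidden magnetic image visible.
up = transmitted_intensity(1.0, 85.0, +0.5)
down = transmitted_intensity(1.0, 85.0, -0.5)
contrast = (up - down) / (up + down)
```

Flipping the magnet (or tilting the hologram) toggles the sign of the rotation, which is one way to read the "blinking effect" mentioned above.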

2.
3.
Web-based bid invitation platforms and reverse auctions are increasingly used by consumers for the procurement of goods and services. An empirical examination shows that in the B2C setting these procurement methods generate considerable benefits for the consumer:
–  Reverse auctions and bid invitation platforms generate high consumer surplus in the procurement of general and craft services.
–  The level of this consumer surplus is affected by the number of bidders; the duration of the auction and the starting price are less important.
–  In the painting business, prices are considerably lower than through traditional procurement channels.
–  On bid invitation platforms, in most cases (> 55%) the bid with the lowest price is chosen.
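The bidder-count effect reported above can be illustrated with a small Monte Carlo sketch. The uniform bid range and reference price are invented for the example and are not the paper's data; the point is only that the expected minimum bid falls as the number of bidders grows, so consumer surplus rises.

```python
import random

def expected_surplus(n_bidders, reference_price=100.0, trials=20000, seed=1):
    """Monte Carlo sketch: bids are drawn uniformly on [60, 100]; the buyer
    takes the cheapest bid, so surplus = reference_price - min(bids).
    More bidders -> lower expected minimum bid -> higher consumer surplus."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [rng.uniform(60.0, 100.0) for _ in range(n_bidders)]
        total += reference_price - min(bids)
    return total / trials
```

Under these assumptions the surplus grows monotonically with the number of bidders, consistent with the second finding above.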

4.
If a message can have n different values and all values are equally probable, then the entropy of the message is log(n). In the present paper, we discuss the expectation value of the entropy, for an arbitrary probability distribution. We introduce a mixture of all possible probability distributions. We assume that the mixing function is uniform
•  either in flat probability space, i.e. the unitary n-dimensional hypertriangle
•  or in Bhattacharyya’s spherical statistical space, i.e. the unitary n-dimensional hyperoctant.
A computation is a manipulation of an incoming message, i.e. a mapping in probability space:
•  either a reversible mapping, i.e. a symmetry operation (rotation or reflection) in n-dimensional space
•  or an irreversible mapping, i.e. a projection operation from n-dimensional to lower-dimensional space.
During a reversible computation, no isentropic path in the probability space can be found. Therefore we conclude that a computation cannot be represented by a message that merely follows a path in n-dimensional probability space. Rather, the point representing the mixing function travels along a path in an infinite-dimensional Hilbert space. In honour of Prof. Dr. Henrik Farkas (Department of Chemical Physics, Technical University of Budapest), an outstanding scientist and most remarkable human being, who unfortunately left us on 21 July 2005.
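For the flat-probability-space case, a uniform mixture over all distributions on n values is the Dirichlet(1, …, 1) distribution on the simplex. The expected Shannon entropy then has the standard closed form E[H] = H_n − 1 nats (harmonic number minus one); this is background knowledge rather than a formula quoted from the abstract, so the sketch below checks it numerically.

```python
import math
import random

def sampled_expected_entropy(n, trials=20000, seed=0):
    """Monte Carlo estimate of the expected Shannon entropy (in nats)
    of a message distribution drawn uniformly from the flat
    n-dimensional probability simplex, i.e. Dirichlet(1,...,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Normalized exponential variates are uniform on the simplex.
        e = [rng.expovariate(1.0) for _ in range(n)]
        s = sum(e)
        p = [x / s for x in e]
        total += -sum(q * math.log(q) for q in p if q > 0)
    return total / trials

def closed_form(n):
    """E[H] = H_n - 1 (in nats) for the uniform mixture on the simplex."""
    return sum(1.0 / k for k in range(1, n + 1)) - 1.0
```

Note that E[H] is strictly below the maximum log(n) attained by the equiprobable message, which is the gap the paper's expectation-value analysis concerns.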

5.
Alan Bundy 《AI & Society》2007,21(4):659-668
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of:
–  the nature of AI
–  the potential social benefit of applied AI
–  the importance of basic AI research
–  the role of logic and the methodology of rational construction
–  the interplay of applied and basic AI research, and
–  the importance of funding basic AI.
These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.

6.
Conclusion  We have provided a theoretical and methodological justification for the complete construction of a universal formalized language of knowledge. We have proved the following:
–  the base term classification ensures unambiguous and deep indexation of documents;
–  the fixed sentence syntax makes it possible to standardize information-retrieval languages and automatic translation between languages;
–  the fixed message semantics provides the following opportunities: measuring the semantic information; rating the intensification of intellectual effort; eliminating unjustified duplication of research and publication; providing the user with timely necessary information in a form suitable for direct processing and use; organizing a national cost-efficient communication technology; solving linguistic problems of artificial intelligence and informatization of society; creating a reliable structural foundation for the development of a common unambiguous language for the entire humanity.
Deceased. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 154–162, July–August, 1997.

7.
8.
We present quantum algorithms for the following matching problems in unweighted and weighted graphs with n vertices and m edges:
•  Finding a maximal matching in general graphs in time .
•  Finding a maximum matching in general graphs in time .
•  Finding a maximum weight matching in bipartite graphs in time , where N is the largest edge weight.
Our quantum algorithms are faster than the best known classical deterministic algorithms for the corresponding problems. In particular, the second result solves an open question stated in a paper by Ambainis and Špalek (Proceedings of STACS’06, pp. 172–183, 2006).
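The running-time expressions in the bullets above were lost in extraction and are left as gaps. As a classical point of reference for the first problem, a maximal (not maximum) matching is computable by a simple greedy scan in O(n + m) time; the sketch below is that textbook baseline, not the paper's quantum algorithm.

```python
def greedy_maximal_matching(n, edges):
    """Classical O(n + m) baseline: scan the edge list once, adding an
    edge whenever both endpoints are still unmatched. The result is
    maximal (no further edge can be added) but not necessarily maximum."""
    matched = [False] * n
    matching = []
    for u, v in edges:
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching
```

A maximal matching is always at least half the size of a maximum one, which is why it is the natural cheap baseline to compare quantum speedups against.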

9.
The advances in polymer materials and technologies for telecom applications are reported. The polymers include new highly halogenated acrylates, which possess absorption losses less than 0.25 dB/cm and refractive indices ranging from 1.3 to 1.5 in the 1.5 μm wavelength region. The halogenated liquid monomers are highly intermixable, photocurable under UV exposure and exhibit high contrast in polymerization. The polymer technologies developed at the Institute on Laser and Information Technologies of the Russian Academy of Sciences (ILIT RAS) include:
–  UV contact lithography permitting creation of single-mode polymer waveguides and waveguide arrays
–  submicron printing for fabricating corrugated waveguides and polymer phase masks
–  UV laser holography for writing refractive index gratings in polymer materials.
The technology for fabricating narrowband Bragg filters on the basis of single-mode polymer waveguides with laser-induced submicron index gratings is presented in detail. The filters possess narrowband reflection/transmission spectra in the 1.5 μm telecom wavelength region of 0.2–2.7 nm width, nearly rectangular shape of the stopband, reflectivity R > 99% and negligible radiation losses. They can be used for multiplexing/demultiplexing optical signals in high-speed DWDM fiber networks. The text was submitted by the authors in English.
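The grating periods involved can be read off the first-order Bragg condition λ_B = 2·n_eff·Λ. The effective index n_eff = 1.45 below is an assumption chosen inside the reported 1.3–1.5 refractive-index range, not a value stated in the abstract.

```python
def bragg_grating_period(wavelength_nm, n_eff):
    """First-order Bragg condition lambda_B = 2 * n_eff * Lambda, solved
    for the grating period Lambda that reflects a given vacuum
    wavelength."""
    return wavelength_nm / (2.0 * n_eff)

# For the 1.5 um telecom band at an assumed n_eff = 1.45, the required
# period is ~534 nm, i.e. sub-micron: this is why submicron printing and
# UV laser holography are listed among the necessary technologies.
period = bragg_grating_period(1550.0, 1.45)
```

Lower effective indices push the period up and higher ones push it down, but anywhere in the 1.3–1.5 range it stays well below one micron.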

10.
Garibaldo  F.  Rebecchi  E. 《AI & Society》2004,18(1):44-67
In this paper the authors, starting from the experience described and commented on in earlier work by Mancini and Sbordone, deal with the three main epistemological problems that the research group they participated in had to face:
–  The conflicting and ambiguous relationship between psychoanalysis and social research
–  The classical epistemological problem of the relationship between the subject and object of research within the perspective of action research
–  The problem arising from their experience, i.e., the risk of manipulation, and the way to deal with it from an epistemic perspective
The three problems are dealt with one at a time, but from a common perspective, i.e., the attempt to integrate the richness and variety of human subjectivity in social research. As to the relationship between psychoanalysis and social research, a special section is devoted to the implications of an integrated or convergent methodology on team-working in organisations.

11.
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
•  The measure hypothesis: NP does not have p-measure 0.
•  The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME language by an NP refuter.
•  The NP-machine hypothesis: there is an NP machine accepting 0* for which no -time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.

12.
In this paper we classify several algorithmic problems in group theory in the complexity classes PZK and SZK (problems with perfect/statistical zero-knowledge proofs respectively). Prior to this, these problems were known to be in . As , we have a tighter upper bound for these problems. Specifically:
•  We show that the permutation group problems Coset Intersection, Double Coset Membership, Group Conjugacy are in PZK. Further, the complements of these problems also have perfect zero knowledge proofs (in the liberal sense). We also show that permutation group isomorphism for solvable groups is in PZK. As an ingredient of this protocol, we design a randomized algorithm for sampling short presentations of solvable permutation groups.
•  We show that the complement of all the above problems have concurrent zero knowledge proofs.
•  We prove that the above problems for black-box groups are in SZK.
•  Finally, we also show that some of the problems have SZK protocols with efficient provers in the sense of Micciancio and Vadhan (Lecture Notes in Comput. Sci. 2729, 282–298, 2003).

13.
Abstract  The development of complex embedded software systems, as used today in telecommunication systems, cars and aircraft, or in the control software of automation systems, requires a structured, modular approach and adequate techniques for the precise description of requirements, of the system architecture with its components, of the interfaces to the system environment and between the internal components, of the interaction between the controlled and the controlling subsystem, and finally of the implementation. Great hopes are attached to the early and consistent use of suitable models (keywords: UML, the "Unified Modeling Language", and MDA, "Model Driven Architecture") for making development tasks more manageable. This article describes the theoretical foundations for a consistently model-based approach in the form of a coherent, homogeneous and yet modular toolkit of models, which is indispensable for this purpose. Particular emphasis is placed on the topics of:
–  interfaces,
–  hierarchical decomposition,
–  architectures through composition and decomposition,
–  abstraction through layering,
–  realization through state machines,
–  refinement of hierarchy, interfaces and behavior,
–  changes of abstraction level, and
–  an integrated view of the controlled and controlling subsystems.
As in all other engineering disciplines, this modeling toolkit must conform to a well-thought-out, internally consistent logical-mathematical theory. The theory presented here consists of a set of notations and theorems that provide a basis for scientifically founded, tool-supportable methods and yield an approach pragmatically adapted to the application domains (keyword: domain-specific languages). For a scientifically sound method, the modeling theory is more central than the syntactic form of the modeling language. The representation of models by textual or graphical description means is undoubtedly an important prerequisite for the practical use of modeling techniques, but must be seen as convenient and fundamentally interchangeable "syntactic sugar".
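The "realization through state machines" item can be illustrated by a minimal component whose entire interface is one step function. The class, the valve-controller example, and all state/input names below are invented for the illustration; they are not from the article.

```python
class StateMachine:
    """Illustrative Mealy machine: behavior is a transition table
    (state, input) -> (next_state, output), and the component's
    interface to its environment is the single step() method.
    Composition and refinement in a model-based toolkit operate on
    such interface descriptions."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def step(self, inp):
        self.state, out = self.transitions[(self.state, inp)]
        return out

# A controlled/controlling pair in miniature: a controller that opens
# a valve while the level sensor reads "low" and closes it otherwise.
controller = StateMachine("closed", {
    ("closed", "low"): ("open", "open_valve"),
    ("open", "low"): ("open", "keep_open"),
    ("open", "ok"): ("closed", "close_valve"),
    ("closed", "ok"): ("closed", "idle"),
})
```

Refining such a machine means replacing a state or an output with a more detailed sub-machine while preserving the observable step-interface, which is the sense of "refinement of hierarchy, interfaces and behavior" above.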

14.
We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time
–  collapses to the second level of the polynomial-time hierarchy,
–  has polynomial-size circuits, and
–  has publishable proofs (EXPTIME = MA).
We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA-P.

15.
We consider the problem of efficiently sampling Web search engine query results. In turn, using a small random sample instead of the full set of results leads to efficient approximate algorithms for several applications, such as:
•  Determining the set of categories in a given taxonomy spanned by the search results;
•  Finding the range of metadata values associated with the result set in order to enable “multi-faceted search”;
•  Estimating the size of the result set;
•  Data mining associations to the query terms.
We present and analyze efficient algorithms for obtaining uniform random samples applicable to any search engine that is based on posting lists and document-at-a-time evaluation. (To our knowledge, all popular Web search engines, for example, Google, Yahoo Search, MSN Search, Ask, belong to this class.) Furthermore, our algorithm can be modified to follow the modern object-oriented approach whereby posting lists are viewed as streams equipped with a next method, and the next method for Boolean and other complex queries is built from the next method for primitive terms. In our case we show how to construct a basic sample-next(p) method that samples term posting lists with probability p, and show how to construct sample-next(p) methods for Boolean operators (AND, OR, WAND) from primitive methods. Finally, we test the efficiency and quality of our approach on both synthetic and real-world data. A preliminary version of this work has appeared in [3]. Work performed while A. Anagnostopoulos and A.Z. Broder were at IBM T. J. Watson Research Center.
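A minimal sketch of the primitive sample-next(p) method described above, for a single term's posting list; the Boolean AND/OR/WAND constructions built on top of it are omitted, and the class and parameter names are invented for the illustration.

```python
import random

class PostingListSampler:
    """Primitive sample-next(p): iterate a term's posting list (sorted
    doc ids) and keep each document independently with probability p,
    so the returned stream is a Bernoulli(p) subsample of the full
    result set. Returns None once the list is exhausted."""

    def __init__(self, postings, p, seed=42):
        self.it = iter(sorted(postings))
        self.p = p
        self.rng = random.Random(seed)

    def sample_next(self):
        for doc in self.it:
            if self.rng.random() < self.p:
                return doc
        return None
```

One application listed above falls out directly: since a p-sample has expected size p·|results|, counting the sampled documents and dividing by p estimates the result-set size without enumerating all results.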

16.
Consider an information network with threats called attackers; each attacker uses a probability distribution to choose a node of the network to damage. Opponent to the attackers is a protector entity called defender; the defender scans and cleans from attacks some part of the network (in particular, a link), which it chooses independently using its own probability distribution. Each attacker wishes to maximize the probability of escaping its cleaning by the defender; towards a conflicting objective, the defender aims at maximizing the expected number of attackers it catches. We model this network security scenario as a non-cooperative strategic game on graphs. We are interested in its associated Nash equilibria, where no network entity can unilaterally increase its local objective. We obtain the following results:
•  We obtain an algebraic characterization of (mixed) Nash equilibria.
•  No (non-trivial) instance of the graph-theoretic game has a pure Nash equilibrium. This is an immediate consequence of some covering properties we prove for the supports of the players in all (mixed) Nash equilibria.
•  We coin a natural subclass of mixed Nash equilibria, which we call Matching Nash equilibria, for this graph-theoretic game. Matching Nash equilibria are defined by enriching the necessary covering properties we proved with some additional conditions involving other structural parameters of graphs, such as Independent Sets.
–  We derive a characterization of graphs admitting Matching Nash equilibria. All such graphs have an Expanding Independent Set. The characterization enables a non-deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any such graph.
–  Bipartite graphs are shown to satisfy the characterization. So, using a polynomial time algorithm to compute a Maximum Matching for a bipartite graph, we obtain, as our main result, a deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any instance of the game with a bipartite graph.
A preliminary version of this work appeared in the Proceedings of the 16th Annual International Symposium on Algorithms and Computation, X. Deng and D. Du, eds., Lecture Notes in Computer Science, vol. 3827, pp. 288–297, Springer, December 2005. This work has been partially supported by the IST Program of the European Union under contract 001907 ( ), and by research funds at University of Cyprus.
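The maximum-matching step underlying the main result can be sketched with Kuhn's augmenting-path algorithm, a standard method for bipartite graphs; the game-theoretic construction of the Matching Nash equilibrium around it is not reproduced here.

```python
def maximum_bipartite_matching(left_count, adj):
    """Kuhn's augmenting-path algorithm for maximum matching in a
    bipartite graph. adj[u] lists the right-side vertices adjacent to
    left vertex u; returns the matching size and the right->left map."""
    match_right = {}  # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be rematched elsewhere.
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = sum(1 for u in range(left_count) if try_augment(u, set()))
    return size, match_right
```

This runs in O(V·E) time, which is all the polynomial-time guarantee in the main result needs; faster Hopcroft-Karp matching would also do.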

17.
Two experiments examined the judged quality of videoconferencing as a function of three measures of IP network performance: bandwidth, latency, and packet loss. The experiments were realized in a laboratory using a network emulator and a commercial videoconferencing system. Experiment 1 used a fractional factorial design and all three parameters:
•  Bandwidth: 128 kbits/s, 384 kbits/s, 768 kbits/s
•  Latency: 0, 150, 300 ms one-way
•  Bursty packet loss: 0, 2%, 4% (using the “Gilbert-Elliott” method).
Experiment 2 was designed (a) to use random packet loss rather than bursty packet loss, and (b) so that a statistical interaction between bandwidth and packet loss could be detected, if present. Bandwidth levels were the same as in Experiment 1, but packet loss was set to 0, 1 and 2%. The experimental design was full factorial. In both experiments, pairs of non-expert judges held five-minute videoconferences for each combination of parameters, then rated the quality of system performance immediately after each videoconference. In both experiments statistical analysis showed that packet loss was the most important network performance parameter in predicting subjective quality of videoconferencing. These results agree with results on VoIP quality. Bandwidth and latency also affected the judges’ ratings, but to a smaller extent. In Experiment 2, an interaction between packet loss and bandwidth was detected: At lower bandwidths and greater packet loss, the subjective ratings were not as low as would be expected, contrary to the idea that quality would degrade catastrophically when both bandwidth and packet loss were simultaneously unfavorable.
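The bursty-loss condition can be reproduced with a two-state Gilbert-Elliott sketch (one common formulation of the model; others order the loss and transition steps differently). The transition probabilities in the test are assumptions chosen so the long-run loss rate is about 4%, the highest level used in Experiment 1.

```python
import random

def gilbert_elliott_losses(n_packets, p_gb, p_bg, loss_in_bad=1.0, seed=7):
    """Gilbert-Elliott bursty loss: a two-state Markov chain alternates
    between a Good state (no loss) and a Bad state where packets are
    dropped with probability loss_in_bad. p_gb and p_bg are the
    Good->Bad and Bad->Good transition probabilities; with loss_in_bad=1
    the long-run loss rate is p_gb / (p_gb + p_bg)."""
    rng = random.Random(seed)
    state = "good"
    lost = []
    for _ in range(n_packets):
        if state == "good":
            dropped = False
            if rng.random() < p_gb:
                state = "bad"
        else:
            dropped = rng.random() < loss_in_bad
            if rng.random() < p_bg:
                state = "good"
        lost.append(dropped)
    return lost
```

Small p_bg makes bad periods long, producing the clustered (bursty) drops that distinguish this condition from the independent random loss used in Experiment 2.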

18.
This paper summarizes our recent activities to support people in communicating with each other using public computer network systems. Unlike conventional teleconferencing systems, which are mainly for business meetings, we focus on informal communication in open organizations. So far, three different systems have been developed and actually tested.
–  In Socia, we introduced vision agents which act on behalf of their users in a network. To enable a meeting to be scheduled at a mutually acceptable time, we proposed a scheme called non-committed scheduling.
–  Free Walk supports casual meetings among more than a few people. For this purpose, we provide a 3-D virtual space called community common where participants can behave just as in real life.
–  In the ICMAS’96 Mobile Assistant Project, on the other hand, we conducted an experiment at an actual international conference using 100 personal digital assistants and wireless phones. Various services were provided to increase the interactions among participants of the conference.
Based on these experiences, we are now moving towards community-ware to support people in forming a community based on computer network technologies. Toru Ishida, Dr. Eng.: He received the B.E., M.Eng. and D.Eng. degrees from Kyoto University, Kyoto, Japan, in 1976, 1978 and 1989, respectively. He is currently a professor in the Department of Information Science, Kyoto University. He has been working on “Parallel, Distributed and Multiagent Production Systems” (Springer, 1994) since 1983. He first proposed parallel rule firing, and extended it to distributed rule firing. Organizational self-design was then introduced into distributed production systems for increasing adaptiveness. From 1990, he started working on “Real-time Search for Learning Autonomous Agents” (Kluwer Academic Publishers, 1997). Again, organizational adaptation becomes a central issue in controlling multiple problem solving agents. He recently initiated the study of “Communityware: Towards Global Collaboration” (John Wiley and Sons, 1998) with his colleagues.

19.
We consider multicommodity flow problems in capacitated graphs where the treewidth of the underlying graph is bounded by r. The parameter r is allowed to be a function of the input size. An instance of the problem consists of a capacitated graph and a collection of terminal pairs. Each terminal pair has a non-negative demand that is to be routed between the nodes in the pair. A class of optimization problems is obtained when the goal is to route a maximum number of the pairs in the graph subject to the capacity constraints on the edges. Depending on whether routings are fractional, integral or unsplittable, three different versions are obtained; these are commonly referred to respectively as maximum MCF, EDP (the demands are further constrained to be one) and UFP. We obtain the following results in such graphs.
•  An O(r log r log n) approximation for EDP and UFP.
•  The integrality gap of the multicommodity flow relaxation for EDP and UFP is .
The integrality gap result above is essentially tight since there exist (planar) instances on which the gap is . These results extend the rather limited number of graph classes that admit poly-logarithmic approximations for maximum EDP. Another related question is whether the cut-condition, a necessary condition for (fractionally) routing all pairs, is approximately sufficient. We show the following result in this context.
•  The flow-cut gap for product multicommodity flow instances is O(log r). This was shown earlier by Rabinovich; we obtain a different proof.

20.
Self-calibration for imaging sensors is essential to many computer vision applications. In this paper, a new stratified self-calibration and metric reconstruction method is proposed for zooming/refocusing cameras under circular motion. With the assumption of known rotation angles, the circular motion constraints are first formulated. By enforcing the constraints gradually, metric reconstruction is retrieved up to a two-parameter ambiguity. The closed form expression of the absolute conic w.r.t. the two parameters is deduced. The ambiguity is then resolved with the square pixel assumption of the camera. The advantages of this method are mainly as follows:
(i)  It gives precise results by defining and enforcing the circular motion constraints;
(ii)  It is flexible in that it allows both the focal lengths and the principal point to vary;
(iii)  It requires no scene constraint.
Experimental results with both synthetic data and real images are presented, demonstrating the accuracy and robustness of the new method.
