Similar Documents
20 similar documents retrieved.
1.
This paper presents a fabrication method for three-dimensional micro-structures consisting of high-aspect-ratio inclined micro-pillars using simple photolithography. The width and height of the micro-pillars were 10 μm and 200 μm (an aspect ratio of about 20). SU-8 coated on a Cr-patterned Pyrex glass substrate was exposed from the backside at an angle to fabricate inclined micro-pillars. The 3-D micro-structures were fabricated by repeating the backside exposure at different angles. The shape of the micro-structure was defined by the number of exposures and the UV irradiation angles. Complicated micro-structures were fabricated by multi-angle exposures around two axes as well as around one axis.

This work was supported by: the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) Grants-in-Aid for COE Research of Waseda University, Scientific Research Priority Area (B) No. 13124209; Japan Society for the Promotion of Science Grants-in-Aid for Creative Scientific Research No. 13BS0024; the Nanotechnology Support Project of MEXT; 21COE "Practical Nano-Chemistry" from MEXT; and Waseda University Grant for Special Research Project (Individual Research) No. 2002A-865.

2.
Unification algorithms have been constructed for semigroups and for commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups, N and a second class, and justify their use. We present an equation-solving algorithm for any member of the class N. This algorithm is relative to having an algorithm to determine all non-negative solutions of a certain class of degree-2 Diophantine equations. The difficulties arising when attempting to solve equations in members of the second class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.
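As a point of reference for the Diophantine side of the reduction, the sketch below enumerates all non-negative integer solutions of a degree-2 Diophantine equation by brute force over a bounded box. The particular equation and the bound are our own illustrative choices, not taken from the paper.

```python
from itertools import product

def nonneg_solutions(f, n_vars, bound):
    """Brute-force all non-negative integer solutions of f(x) = 0
    with every coordinate at most `bound`."""
    return [x for x in product(range(bound + 1), repeat=n_vars)
            if f(*x) == 0]

# Degree-2 example: x*y - 2*x - 3*y = 0, i.e. (x - 3)*(y - 2) = 6.
print(nonneg_solutions(lambda x, y: x*y - 2*x - 3*y, 2, 20))
# [(0, 0), (4, 8), (5, 5), (6, 4), (9, 3)]
```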

3.
An imaging system with a single effective viewpoint is called a central projection system. The conventional perspective camera is an example of a central projection system. A catadioptric realization of omnidirectional vision combines reflective surfaces with lenses. Catadioptric systems with a unique projection center are also examples of central projection systems. Whenever an image is acquired, points in 3D space are mapped into points in the 2D image plane. The image formation process represents a transformation from R^3 to R^2, and mathematical models can be used to describe it. This paper discusses the definition of world coordinate systems that simplify the modeling of general central projection imaging. We show that an adequate choice of the world coordinate reference system can be highly advantageous. Such a choice does not imply that new information will be available in the images. Instead the geometric transformations will be represented in a common and more compact framework, while simultaneously enabling new insights. The first part of the paper focuses on static imaging systems that include both perspective cameras and catadioptric systems. A systematic approach to select the world reference frame is presented. In particular we derive coordinate systems that satisfy two differential constraints (the compactness and the decoupling constraints). These coordinate systems have several advantages for the representation of the transformations between the 3D world and the image plane. The second part of the paper applies the derived mathematical framework to active tracking of moving targets. In applications of visual control of motion the relationship between motion in the scene and image motion must be established. In the case of active tracking of moving targets these relationships become more complex due to camera motion. Suitable world coordinate reference systems are defined for three distinct situations: perspective camera with planar translation motion, perspective camera with pan and tilt rotation motion, and catadioptric imaging system rotating around an axis going through the effective viewpoint and the camera center. Position and velocity equations relating image motion, camera motion and target 3D motion are derived and discussed. Control laws to perform active tracking of moving targets using visual information are established.
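As a minimal illustration of the mapping the paper models, the sketch below implements the classical pinhole (central projection) camera; the rotation R and translation t encode the choice of world coordinate frame discussed above. The function name and values are our own illustrative choices.

```python
import numpy as np

def central_projection(X_world, R, t, f=1.0):
    """Map 3D world points to 2D image points through a single
    effective viewpoint (pinhole model with focal length f)."""
    X_cam = X_world @ R.T + t                # world frame -> camera frame
    return f * X_cam[:, :2] / X_cam[:, 2:3]  # perspective division by depth

# Changing the world frame changes only R and t, never the image content.
pts = np.array([[0.0, 0.0, 4.0], [1.0, -1.0, 2.0]])
print(central_projection(pts, np.eye(3), np.zeros(3)))  # [[0. 0.], [0.5 -0.5]]
```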

4.
Observability of 3D Motion
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the epipolar constraint, applied to motion fields, and the other the positive depth constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes.
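For reference, the two constraints underlying the analysis can be written in their standard forms (our notation; a summary, not a quotation from the paper):

```latex
% Epipolar constraint on corresponding image points x, x' for a camera
% undergoing rotation R and translation t (E is the essential matrix):
\mathbf{x}'^{\top} E \,\mathbf{x} = 0, \qquad E = [\mathbf{t}]_{\times} R
% Positive depth (cheirality) constraint: every reconstructed scene point
% must lie in front of the camera,
Z_i > 0 \quad \text{for all observed points } i
```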

5.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as impossible reduction. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions are only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty difference between the upper and lower approximations of A, where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably grasped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut of A, defined by the Lebesgue integral of its characteristic function.
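To make the U-uncertainty concrete, here is a worked example for a symmetric triangular fuzzy number (our illustration, not from the source): with height h(A) = 1 and support of width w, the α-cut has measure μ(A_α) = w(1 − α), so

```latex
U(A) \;=\; \int_0^{1} \log\bigl[\,1 + w(1-\alpha)\,\bigr]\, d\alpha
      \;=\; \frac{(1+w)\log(1+w) - w}{w}
% e.g. for w = 1 and the natural logarithm, U(A) = 2 log 2 - 1, about 0.386.
```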

6.
This report discusses the capability of an associative memory to search some useful databases. The report utilizes a simplified cell and a collection of assembler-language instructions to show how sets and trees can be searched in the memory. An OR rail and an EXCLUSIVE-OR rail are discussed in relation to their use in searching ordered and unordered sets, strings, and tree data structures. Linked data structures are also discussed. This report is oriented toward the software aspects of the associative memory, to lead to further research in the design of high-level languages that utilize the capability of the rails.
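A minimal software analogue of the rail-based matching described above (our sketch; the word width, mask convention and data are hypothetical): the EXCLUSIVE-OR of a stored word with the search key is zero exactly on the bits where they agree, so masking and testing for zero selects the matching cells.

```python
def associative_search(memory, key, mask):
    """Content-addressable search: indices of all stored words that match
    `key` on the bit positions selected by `mask`.  A real associative
    memory compares every cell in parallel; we simulate sequentially."""
    return [i for i, word in enumerate(memory)
            if (word ^ key) & mask == 0]  # XOR is 0 wherever bits agree

mem = [0b1010, 0b1001, 0b1110, 0b1010]
print(associative_search(mem, key=0b1010, mask=0b1111))  # [0, 3]
```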

7.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts.

David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

8.
An autonomous underwater robot named Twin-Burger was developed as a versatile test bed to establish the techniques which realize intelligent robot behaviors. The robot was designed to have the functions necessary for complex tasks, including cooperative task execution with other robots and divers. The first robot, Twin-Burger I, was completed and launched in November 1992. This paper describes the hardware and software systems of the robot. Motion of the robot is controlled by sliding controllers based on simplified equations of motion which are derived from system identification experiments. Tank tests proved that the robot was able to cruise along a commanded path as a sequence of control actions generated by the sliding controllers. The Distributed Vehicle Management Architecture (DVMA) is applied to the robot as an architecture for the control software. Mission execution experiments show that the Twin-Burger behaves appropriately according to the mission and environmental conditions.
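As a generic illustration of the control-law family named above (a sketch under assumed one-axis dynamics; the gains lam, k and boundary-layer width phi are hypothetical, not the paper's identified values):

```python
import numpy as np

def sliding_control(e, e_dot, lam=1.0, k=5.0, phi=0.1):
    """One-axis sliding-mode control: drive the sliding surface
    s = e_dot + lam*e to zero with u = -k * sat(s/phi).  The saturation
    layer of width phi replaces sign(s) to reduce chattering."""
    s = e_dot + lam * e
    return -k * np.clip(s / phi, -1.0, 1.0)

print(sliding_control(e=0.5, e_dot=-0.2))  # -5.0 for this error state
```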

9.
A concurrent processing algorithm is developed for materially nonlinear stability analysis of imperfect columns with biaxial partial rotational end restraints. The algorithm for solving the governing nonlinear ordinary differential equations is implemented on a multiprocessor computer called the Finite Element Machine, developed at the NASA Langley Research Center. Numerical results are obtained on up to nine concurrent processors. A substantial computational gain is achieved by using the parallel processing approach.

10.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in the formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema.

The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the resulting confusion directly affects other debates where "model" is a key concept. We also suggest that the need for a philosophical discussion of the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called Philosophy of Engineering.

11.
Some robotic tasks usually achieved through motion control (trajectory tracking control) can also be well performed by resorting to a path control philosophy. This is the case for applications where motion coordination among the robot joints is more important than joint tracking of a timed desired reference. This paper illustrates this concept by means of two academic case studies, covering theory and experiments, using a two-degrees-of-freedom direct-drive revolute arm.

12.
The Shakespeare Clinic has developed 51 computer tests of Shakespeare play authorship and 14 of poem authorship, and applied them to 37 claimed true Shakespeares, to 27 plays of the Shakespeare Apocrypha, and to several poems of unknown or disputed authorship. No claimant, and none of the apocryphal plays or poems, matched Shakespeare. Two plays and one poem from the Shakespeare Canon, Titus Andronicus, Henry VI, Part 3, and A Lover's Complaint, do not match the others.

Ward Elliott is the Burnet C. Wohlford Professor of American Political Institutions at Claremont McKenna College. He is interested in, and has published in, almost everything, including politics, pollution, transportation, smog and Shakespeare. Robert J. Valenza is W.M. Keck Professor of Mathematics and Computer Science at Claremont McKenna College. He has written research articles in mathematics and metaphysics, as well as stylometrics. He is the author of Linear Algebra: An Introduction to Abstract Mathematics (Springer-Verlag, 1993).

13.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs, the C,A,T and Relaxed Compositionality translator notations, with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect. In general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

14.
The number of virtual connections in the nodal space of an ATM network of arbitrary structure and topology is computed by a method based on a new concept, the covering domain, which has a concrete physical meaning. The method rests on a model of network information sources and boundary switches developed for an ATM transfer network by the entropy approach. Computations involve the solution of systems of linear equations. The optimization model used to compute the number of virtual connections for multi-category traffic in an ATM network component is useful in estimating the resource of nodal equipment and communication channels. The variable parameters of the model are the transmission bands for the different traffic categories.

15.
We are investigating how people move from individual to group work through the use of both personal digital assistants (PDAs) and a shared public display. Our scenario of this work covers the following activities. First, mobile individuals can create personal notes on their PDAs. Second, when individuals meet in real time, they can selectively publicise notes by moving them to a shared public display. Third, the group can manipulate personal and public items in real time through both PDAs and the shared public display, where the notes contained on both PDAs and public display are automatically synchronised. Finally, people leave a meeting with a common record of their activity. We describe our SharedNotes system that illustrates how people move through this scenario. We also highlight a variety of problematic design issues that result from having different devices and from having the system enforce a rigid distinction between personal and public information.

16.
In this paper we show, for the first time, how Radial Basis Function (RBF) network techniques can be used to explore questions surrounding the authorship of historic documents. The paper illustrates the technical and practical aspects of RBFs, using data extracted from works written in the early 17th century by William Shakespeare and his contemporary John Fletcher. We also present benchmark comparisons with other standard techniques for contrast and comparison.

David Lowe is Professor of Neural Computing at Aston University, UK. His research interests span from the theoretical aspects of dynamical systems theory and statistical pattern processing to a wide range of application domains, from financial market analysis ("Novel Exploitation of Neural Network Methods in Financial Markets", invited paper, World Conference on Computational Intelligence, vol. VI, pp. 3623–28, 1994) to the artificial nose ("Novel Topographic Nonlinear Feature Extraction using Radial Basis Functions for Concentration Coding in the Artificial Nose", 3rd IEE International Conference on Artificial Neural Networks, pp. 95–99, Conference Publication number 372, The Institution of Electrical Engineers, 1993). Robert Matthews is a visiting research fellow at Aston University. His research interests include probability, number theory and astronomy. His recent paper in Nature (vol. 374, pp. 681–82, 1995) somehow managed to combine all three.
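A minimal sketch of the fixed-centre RBF approach (our illustration; the feature vectors, centres and width are hypothetical stand-ins for the stylometric data):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width ** 2)

def rbf_fit(X, y, centers, width):
    """Least-squares output weights for a fixed-centre RBF network."""
    return np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)[0]

# Hypothetical "stylometric" features, e.g. function-word frequencies:
rng = np.random.default_rng(0)
X = rng.random((40, 5))
y = (X[:, 0] > 0.5).astype(float)        # stand-in authorship label
w = rbf_fit(X, y, centers=X[::8], width=0.7)
print(rbf_design(X, X[::8], 0.7) @ w)    # network outputs on the training set
```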

17.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ([A], [b]) = {x ∈ ℝⁿ | ∀A ∈ [A], ∃b ∈ [b], Ax = b}. We describe two different polyhedrons that represent subsets of all possible interval vectors in Σ([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ([A], [b]).
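An interval vector [x] is contained in the tolerance set exactly when the interval product [A][x] is contained in [b]; the sketch below (our illustration, with toy one-dimensional data) implements this containment test by endpoint enumeration.

```python
import numpy as np

def interval_matvec(A_lo, A_hi, x_lo, x_hi):
    """Exact componentwise range of A @ x over A in [A_lo, A_hi] and
    x in [x_lo, x_hi]: each entry a_ij * x_j ranges over the min/max of
    its four endpoint products, and the per-row ranges add up."""
    m, n = A_lo.shape
    lo, hi = np.zeros(m), np.zeros(m)
    for i in range(m):
        for j in range(n):
            p = [a * x for a in (A_lo[i, j], A_hi[i, j])
                       for x in (x_lo[j], x_hi[j])]
            lo[i] += min(p)
            hi[i] += max(p)
    return lo, hi

def in_tolerance_set(A_lo, A_hi, x_lo, x_hi, b_lo, b_hi):
    """True iff the interval vector [x] lies in the tolerance set,
    i.e. [A][x] is contained in [b]."""
    lo, hi = interval_matvec(A_lo, A_hi, x_lo, x_hi)
    return bool(np.all(lo >= b_lo) and np.all(hi <= b_hi))

print(in_tolerance_set(np.array([[0.9]]), np.array([[1.1]]),
                       np.array([1.0]), np.array([2.0]),
                       np.array([0.5]), np.array([2.5])))
# True: [A][x] = [0.9, 2.2] is contained in [b] = [0.5, 2.5]
```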

18.
Over many familiar datatypes the notion of computable coincides with the notion of flowchartable. It is also known that flowcharts are not a universal programming formalism over arbitrary datatypes, in the sense that there are datatypes over which not all computable functions are flowchartable. In this paper we consider various extensions and restrictions of the basic formalism of flowcharts, and then, for every such formalism, we characterize the datatypes over which the computable functions are exactly the functions programmable in this formalism. We say that a function is computable over a datatype if it is effective relative to the primitive operations and relations of the datatype.

19.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images.

The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.

20.
A technique to model and to verify distributed algorithms is suggested. This technique (based on Petri nets) reduces the modelling and analysis effort to a reasonable level. The paper outlines the technique using the example of a typical network algorithm, the echo algorithm.

Supported by the DFG projects "Verteilte Algorithmen" and "Konsensalgorithmen".
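A compact sequential stand-in for the echo algorithm's behaviour (our sketch: real echo waves run concurrently on a network, while here recursion models a node waiting for echoes from its subtree before answering its parent):

```python
def echo(graph, initiator):
    """Flood explorer messages out from the initiator and collect echoes.
    Returns the order in which nodes first receive an explorer message."""
    visited, order = set(), []

    def explore(node):
        visited.add(node)
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                explore(nb)  # explorer out; returning models the echo back
        # all neighbours have answered: echo to parent (the implicit return)

    explore(initiator)
    return order

g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(echo(g, 1))  # [1, 2, 3, 4]
```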
