Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study the class of pseudo-BL-algebras in which every maximal filter is normal. We present an equational base for this class and extend these results to the class of basic pseudo hoops with fixed strong unit. This is a continuation of the research of Botur et al. (Soft Comput 16:635–644, doi: 10.1007/s00500-011-0763-7, 2012).

2.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain Method using an immersed microstructure method, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time-step adaptation allows the numerical scheme to iteratively resolve the changing physical fields by refining the time-steps during phases of the process when the system undergoes large changes on a relatively small time-scale, and to enlarge the time-steps when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation, with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally so that it can be easily utilized by researchers in the field. More advanced conduction models, based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
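The temporally-adaptive idea described above can be illustrated with a generic step-size controller that refines the time-step when the field update changes too much between steps and coarsens it when the change is small. This is only a minimal sketch of the general strategy under assumed tolerances and a hypothetical `update` function, not the paper's coupled optical-thermal-mechanical solver.

```python
import numpy as np

def adaptive_march(update, u0, t_end, dt0, tol, dt_min=1e-9, dt_max=1.0):
    """Generic adaptive time-stepping loop (illustrative only).

    `update(u, dt)` is assumed to advance the coupled fields `u` by one step
    of size `dt` and return the new state. A step is accepted when the relative
    change is below `tol`; otherwise dt is halved and the step retried.
    Accepted steps with very small changes let dt grow again.
    """
    u, t, dt = np.asarray(u0, dtype=float), 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_new = update(u, dt)
        change = np.linalg.norm(u_new - u) / (np.linalg.norm(u) + 1e-30)
        if change > tol and dt > dt_min:
            dt = max(0.5 * dt, dt_min)   # refine: fields are changing fast
            continue                     # retry the step with a smaller dt
        u, t = u_new, t + dt
        if change < 0.1 * tol:
            dt = min(2.0 * dt, dt_max)   # coarsen: fields are changing slowly
    return u

# toy usage: a single exponentially decaying "field"
final = adaptive_march(lambda u, dt: u + dt * (-5.0 * u), [1.0], 2.0, 0.1, 1e-2)
```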

3.
Robust and accurate detection of the pupil position is a key building block for head-mounted eye tracking and a prerequisite for applications on top, such as gaze-based human–computer interaction or attention analysis. Despite a large body of work, detecting the pupil in images recorded under real-world conditions is challenging given significant variability in eye appearance (e.g., illumination, reflections, occlusions, etc.), individual differences in eye physiology, as well as other sources of noise, such as contact lenses or make-up. In this paper, we review six state-of-the-art pupil detection methods, namely ElSe (Fuhl et al. in Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ACM, New York, NY, USA, pp 123–130, 2016), ExCuSe (Fuhl et al. in Computer analysis of images and patterns. Springer, New York, pp 39–51, 2015), Pupil Labs (Kassner et al. in Adjunct proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (UbiComp), pp 1151–1160, 2014. doi: 10.1145/2638728.2641695), SET (Javadi et al. in Front Neuroeng 8, 2015), Starburst (Li et al. in Computer vision and pattern recognition-workshops, 2005. IEEE Computer society conference on CVPR workshops. IEEE, pp 79–79, 2005), and Świrski (Świrski et al. in Proceedings of the symposium on eye tracking research and applications (ETRA). ACM, pp 173–176, 2012. doi: 10.1145/2168556.2168585). We compare their performance on a large-scale data set consisting of 225,569 annotated eye images taken from four publicly available data sets. Our experimental results show that the algorithm ElSe (Fuhl et al. 2016) outperforms the other pupil detection methods by a large margin, thus offering robust and accurate pupil positions on challenging everyday eye images.
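As a point of reference for what such specialized detectors improve upon, a naive pupil-detection baseline can be written in a few lines with OpenCV (dark-region thresholding plus largest-blob fitting). This sketch is not any of the six reviewed algorithms, the threshold value is an arbitrary assumption, and it typically fails under the reflections and occlusions discussed above.

```python
import cv2

def naive_pupil_center(eye_image_path, dark_threshold=40):
    """Very rough pupil estimate: threshold dark pixels, take the largest blob."""
    gray = cv2.imread(eye_image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # pupil pixels are dark: keep intensities below the (assumed) threshold
    _, mask = cv2.threshold(blurred, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    # OpenCV 4.x return signature
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (x, y), radius

# center = naive_pupil_center("eye.png")   # hypothetical input image
```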

4.
Recently, Gao et al. (J Time Ser Anal, 2016, doi: 10.1111/jtsa.12178) proposed a new estimation method for the dynamic panel probit model with random effects and derived the theoretical properties of the estimator. In this paper, we extend their estimation method to the \(T\ge 3\) case, and some Monte Carlo simulations are presented to illustrate the extended estimator.

5.
We show how to realize two-factor authentication for a Bitcoin wallet. To do so, we explain how to employ an ECDSA adaptation of the two-party signature protocol by MacKenzie and Reiter (Int J Inf Secur 2(3–4):218–239, 2004. doi: 10.1007/s10207-004-0041-0) in the context of Bitcoin and present a prototypic implementation of a Bitcoin wallet that offers both two-factor authentication and verification over a separate channel. Since we use a smartphone as the second authentication factor, our solution can be used with hardware already available to most users, and the user experience is quite similar to existing online banking authentication methods.

6.
State-based formal methods [e.g. Event-B/RODIN (Abrial in Modeling in Event-B—system and software engineering. Cambridge University Press, Cambridge, 2010; Abrial et al. in Int J Softw Tools Technol Transf (STTT) 12(6):447–466, 2010)] for critical system development and verification are now well established, with track records including tool support and industrial applications. The focus of proof-based verification, in particular, is on safety properties. Liveness properties, which guarantee that required computations eventually occur or converge, are less well dealt with. Inductive reasoning about liveness is not explicitly supported. Liveness proofs are often complex and expensive, requiring high skill levels on the part of the verification engineer. Fairness-based temporal logic approaches have been proposed to address this, e.g. TLA (Lamport in ACM Trans Program Lang Syst 16(3):872–923, 1994) and the approach of Manna and Pnueli (Temporal verification of reactive systems—safety. Springer, New York, 1995). We address this need by proposing a fairness-based method integrating temporal and first-order logic, proof and tools for modelling and verification of safety and liveness properties. The method is based on an integration of Event-B and TLA. Building on our previous work (Méry and Poppleton in Integrated formal methods, 10th international conference, IFM 2013, Turku, Finland, pp 208–222, 2013. doi: 10.1007/978-3-642-38613-8_15), we present the method via three example population protocols (Angluin et al. in Distrib Comput 18(4):235–253, 2006). These were proposed as a theoretical framework for computability reasoning about Wireless Sensor Network and Mobile Ad-Hoc Network algorithms. Our examples present typical liveness and convergence requirements. We prove convergence results for the examples by integrated modelling and proof with Event-B/RODIN and TLA. We exploit existing proof rules, and define and apply three new proof rules; soundness proofs are also provided. During the process we observe certain repeating patterns in the proofs. These are easily identified and reused because of the explicit nature of the reasoning.
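For orientation, the classical weak-fairness proof rule WF1 from Lamport's TLA paper, one of the existing rules that such fairness-based methods build on, can be stated as follows. This is my rendering of the standard rule; the three new proof rules introduced in the paper are not reproduced here.

```latex
% WF1 (Lamport 1994), for a next-state relation N, helpful action A,
% state function f and state predicates P, Q:
\[
\frac{\begin{array}{l}
P \wedge [N]_f \;\Rightarrow\; (P' \vee Q')\\[2pt]
P \wedge \langle N \wedge A\rangle_f \;\Rightarrow\; Q'\\[2pt]
P \;\Rightarrow\; \mathrm{Enabled}\,\langle A\rangle_f
\end{array}}
{\;\Box[N]_f \wedge \mathrm{WF}_f(A) \;\Rightarrow\; (P \leadsto Q)\;}
\]
```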

7.
An extended QR algorithm specifically tailored for Hamiltonian matrices is presented. The algorithm generalizes the customary Hamiltonian QR algorithm with additional freedom in choosing between various possible extended Hamiltonian Hessenberg forms. In Ferranti et al. (Calcolo, 2015. doi: 10.1007/s10092-016-0192-1) we introduced an algorithm to transform certain Hamiltonian matrices to such forms. Whereas the convergence of the classical QR algorithm is related to classical Krylov subspaces, convergence in the extended case is linked to extended Krylov subspaces, resulting in greater flexibility and possibly enhanced convergence behavior. Details of the implementation, covering the bidirectional chasing and the rotation-based bulge exchange, are presented. The numerical experiments reveal that the convergence depends on the selected extended forms and illustrate the validity of the approach.
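For readers unfamiliar with QR-type eigenvalue methods, the basic unshifted, dense QR iteration that such algorithms ultimately generalize can be sketched as follows. This illustrative version ignores Hessenberg reduction, shifts, Hamiltonian structure preservation and the extended Krylov connection that the paper is actually about.

```python
import numpy as np

def basic_qr_iteration(A, iters=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k where A_k = Q_k R_k.
    For many matrices A_k tends towards (quasi-)triangular form, so its
    diagonal approaches the eigenvalues."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q            # similarity transform: Q^T A_k Q
    return np.diag(Ak)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(basic_qr_iteration(A))      # approximate eigenvalues of the symmetric A
print(np.linalg.eigvalsh(A))      # reference values
```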

8.
The security of the ever-increasing volume of healthcare data is of critical importance. E-health clouds can be seen as a move towards efficient management of such a large volume of healthcare data. Many schemes have been presented to bring more security and privacy, along with efficiency, to the handling of data for the booming e-health industry. Recently, in this connection, Jiang et al. (J Supercomput 1–24, doi: 10.1007/s11227-015-1610-x, 2016) presented a three-factor authentication protocol for e-health clouds. In this letter, we identify a serious flaw in the mutual authentication phase of the scheme: an adversary may launch a denial-of-service (DoS) attack against the service-providing server. Finally, we suggest a modification to the scheme to overcome the DoS attack.

9.
Foreground detection, or moving object detection, is a fundamental and critical task in video surveillance systems. Background subtraction using the Gaussian Mixture Model (GMM) is a widely used approach for foreground detection. Many improvements have been proposed over the original GMM developed by Stauffer and Grimson (IEEE Computer Society conference on computer vision and pattern recognition, vol 2, Los Alamitos, pp 246–252, 1999. doi: 10.1109/CVPR.1999.784637) to accommodate the various challenges encountered in video surveillance systems. This paper presents a review of various background subtraction algorithms based on GMM and compares them on the basis of quantitative evaluation metrics. A performance analysis is also presented to determine the most appropriate background subtraction algorithm for a specific application or scenario of video surveillance systems.
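For concreteness, a GMM-based background subtractor in the Stauffer–Grimson tradition is available off the shelf in OpenCV (MOG2, an improved variant). The snippet below is a generic usage sketch, not an implementation of any specific algorithm reviewed in the paper; the parameter values and the input file name are arbitrary assumptions.

```python
import cv2

# GMM-based background subtraction (MOG2 is an improved GMM variant)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("surveillance.mp4")    # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)         # 255 = foreground, 127 = shadow, 0 = background
    # clean up the mask before any blob analysis
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cv2.imshow("foreground", fg_mask)
    if (cv2.waitKey(1) & 0xFF) == 27:         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```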

10.
What does it take to implement a computer? Answers to this question have often focused on what it takes for a physical system to implement an abstract machine. As Joslin (Minds Mach 16:29–41, 2006) observes, this approach neglects cases of software implementation—cases where one machine implements another by running a program. These cases, Joslin argues, highlight serious problems for mapping accounts of computer implementation—accounts that require a mapping between elements of a physical system and elements of an abstract machine. The source of these problems is the complexity introduced by common design features of ordinary computers, features that would be relevant to any real-world software implementation (e.g., virtual memory). While Joslin is focused on contemporary views, his discussion also suggests a counterexample to recent mapping accounts which hold that genuine implementation requires simple mappings (Millhouse in Br J Philos Sci, 2017. https://doi.org/10.1093/bjps/axx046; Wallace in The emergent multiverse, Oxford University Press, Oxford, 2014). In this paper, I begin by clarifying the nature of software implementation and disentangling it from closely related phenomena like emulation and simulation. Next, I argue that Joslin overstates the degree of complexity involved in his target cases and that these cases may actually give us reasons to favor simplicity-based criteria over relevant alternatives. Finally, I propose a novel problem for simplicity-based criteria and suggest a tentative solution.

11.
This paper focuses on two variants of the Milstein scheme, namely the split-step backward Milstein method and a newly proposed projected Milstein scheme, applied to stochastic differential equations which satisfy a global monotonicity condition. In particular, our assumptions include equations with super-linearly growing drift and diffusion coefficient functions, and we show that both schemes are mean-square convergent of order 1. Our analysis of the error of convergence with respect to the mean-square norm relies on the notion of stochastic C-stability and B-consistency, which was set up and applied to Euler-type schemes in Beyn et al. (J Sci Comput 67(3):955–987, 2016. doi: 10.1007/s10915-015-0114-4). As a direct consequence we also obtain strong order 1 convergence results for the split-step backward Euler method and the projected Euler–Maruyama scheme in the case of stochastic differential equations with additive noise. Our theoretical results are illustrated in a series of numerical experiments.
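To make the setting concrete, here is the classical explicit Milstein scheme for a scalar SDE dX = a(X)dt + b(X)dW, applied to geometric Brownian motion. This is a sketch of the baseline scheme only, not of the split-step backward or projected variants analyzed in the paper, and GBM does not have the super-linearly growing coefficients the paper targets.

```python
import numpy as np

def milstein_path(a, b, db, x0, T, n_steps, rng=None):
    """Classical Milstein scheme for dX = a(X) dt + b(X) dW:
    X_{n+1} = X_n + a h + b dW + 0.5 b b' (dW^2 - h)."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        xn = x[n]
        x[n + 1] = (xn + a(xn) * h + b(xn) * dW
                    + 0.5 * b(xn) * db(xn) * (dW**2 - h))
    return x

# geometric Brownian motion: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma
mu, sigma = 0.05, 0.2
path = milstein_path(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma,
                     x0=1.0, T=1.0, n_steps=1000)
```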

12.
In this paper, we introduce a novel model that restores a color image from a grayscale image with color values given in small regions. The model is based on the idea of generalizing the low dimensional manifold model (Shi et al. in J Sci Comput, 2017. https://doi.org/10.1007/s10915-017-0549-x) and on the YCbCr color space. It involves two prior terms: a weighted nonlocal Laplacian (WNLL) and a weighted total variation (WTV). The WNLL allows regions without color information to be interpolated smoothly from the given sparse color data, while the WTV helps inhibit the diffusion of color values across edges. To cope with various types of sampled data, we introduce an updating rule for the weight function in the WNLL. Furthermore, we present an efficient iterative algorithm for solving the proposed model. Lastly, numerical experiments validate the superior performance of the proposed model over that of other state-of-the-art models.
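The role of the Laplacian prior can be illustrated with a much simpler local analogue: harmonic interpolation of a single chrominance channel from sparsely sampled pixels, obtained by solving the graph-Laplacian system on the unknown pixels with the known samples as Dirichlet data. This sketch uses an ordinary 4-neighbour Laplacian rather than the paper's weighted nonlocal Laplacian, has no TV term, and all names here are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def harmonic_inpaint(channel, known_mask):
    """Fill unknown pixels of one (chrominance) channel by solving the local
    graph-Laplacian system L u = 0 on unknown pixels, with the sampled pixels
    acting as Dirichlet boundary data."""
    h, w = channel.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    # 4-neighbour grid adjacency
    rows, cols = [], []
    for di, dj in ((0, 1), (1, 0)):
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        rows += [a, b]
        cols += [b, a]
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n)).tocsr()
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()  # combinatorial Laplacian

    u = channel.ravel().astype(float).copy()
    ki = np.flatnonzero(known_mask.ravel())      # known (sampled) pixel indices
    ui = np.flatnonzero(~known_mask.ravel())     # unknown pixel indices
    # solve L_uu u_u = -L_uk u_k
    u[ui] = spla.spsolve(L[ui][:, ui].tocsc(), -L[ui][:, ki] @ u[ki])
    return u.reshape(h, w)

# toy usage: only 5% of pixels carry colour information
rng = np.random.default_rng(0)
true_cb = np.outer(np.linspace(0, 1, 64), np.ones(64))
mask = rng.random(true_cb.shape) < 0.05
recovered = harmonic_inpaint(np.where(mask, true_cb, 0.0), mask)
```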

13.
We present the RST Signalling Corpus (Das et al. in RST signalling corpus, LDC2015T10. https://catalog.ldc.upenn.edu/LDC2015T10, 2015), a corpus annotated for signals of coherence relations. The corpus is developed over the RST Discourse Treebank (Carlson et al. in RST Discourse Treebank, LDC2002T07. https://catalog.ldc.upenn.edu/LDC2002T07, 2002), which is annotated for coherence relations. In the RST Signalling Corpus, these relations are further annotated with signalling information. The corpus includes annotation not only for discourse markers, which are considered to be the most typical (or sometimes the only type of) signals in discourse, but also for a wide array of other signals such as reference, lexical, semantic, syntactic, graphical and genre features as potential indicators of coherence relations. We describe the research underlying the development of the corpus and the annotation process, and provide details of the corpus. We also present the results of an inter-annotator agreement study, illustrating the validity and reproducibility of the annotation. The corpus is available through the Linguistic Data Consortium, and can be used to investigate the psycholinguistic mechanisms behind the interpretation of relations through signalling, and also to develop discourse-specific computational systems such as discourse parsing applications.

14.
We have investigated the compatibility of some organic solvents commonly used in electrochemistry with microfluidic channels based on poly(dimethylsiloxane) (PDMS) and compared the stability of electrochemical measurements over several hours with how much PDMS swells when immersed in these solvents. Lee et al. (Anal Chem 75:6544–6554, doi: 10.1021/ac0346712, 2003) have shown that there is a good correlation between the swelling of PDMS and the solubility parameter (δH) of the various solvents and suggested that δH can function as an indication of PDMS compatibility. We show that solvents with a very high swelling ratio can give stable voltammetry over several hours, and thus we do not find that swelling is a good measure of compatibility with PDMS in electrochemical experiments.

15.
To generate long-distance shared quantum correlations (QCs) for information processing in future quantum networks, we recently proposed the concept of the QC repeater and its kernel technique, QC swapping. In addition, we extensively studied QC swapping between two simple QC resources (i.e., a pair of Werner states) with four different methods for quantifying QCs (Xie et al. in Quantum Inf Process 14:653–679, 2015). In this paper, we continue to treat the same issue by employing three other methods associated with relative entropies, i.e., the MPSVW method (Modi et al. in Phys Rev Lett 104:080501, 2010), the Zhang method (arXiv:1011.4333 [quant-ph]) and the RS method (Rulli and Sarandy in Phys Rev A 84:042109, 2011). We first derive analytic expressions for all QCs that occur during the swapping process and then reveal their properties regarding monotonicity and thresholds. Importantly, we find that a long-distance shared QC can indeed be generated from two short-distance ones via QC swapping. In addition, we briefly compare our present results with our previous ones.
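For readers who want to experiment, the two-qubit Werner state used as the elementary resource here is easy to build numerically, and a simple Peres–Horodecki (partial-transpose) test confirms it is entangled exactly for p > 1/3. This is only a sanity-check sketch of the resource state; it computes the minimum eigenvalue of the partial transpose, not the relative-entropy-based QC measures studied in the paper.

```python
import numpy as np

def werner_state(p):
    """Two-qubit Werner state: p |psi-><psi-| + (1 - p) I/4."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return p * np.outer(psi_minus, psi_minus) + (1.0 - p) * np.eye(4) / 4.0

def min_eig_partial_transpose(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit;
    a negative value certifies entanglement (Peres-Horodecki criterion)."""
    r = rho.reshape(2, 2, 2, 2)        # rho[2i+j, 2k+l] = r[i, j, k, l]
    r_pt = r.transpose(0, 3, 2, 1)     # transpose the second subsystem: swap j <-> l
    return np.linalg.eigvalsh(r_pt.reshape(4, 4)).min()

for p in (0.2, 1.0 / 3.0, 0.5, 0.9):
    print(p, min_eig_partial_transpose(werner_state(p)))   # negative for p > 1/3
```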

16.
In the era of big data, with massive sets of digital information of unprecedented volume being collected and/or produced in several application domains, it becomes more and more difficult to manage and query large data repositories. In the framework of the PetaSky project (http://com.isima.fr/Petasky), we focus on the problem of managing scientific data in the field of cosmology. The data we consider are those of the LSST project (http://www.lsst.org/). The overall size of the database that will be produced is expected to exceed 60 PB (LSST data challenge handbook, 2012). In order to evaluate the performance of existing SQL-on-MapReduce data management systems, we conducted extensive experiments using data and queries from the area of cosmology. The goal of this work is to report on the ability of such systems to support large-scale declarative queries. We mainly investigated the impact of data partitioning, indexing and compression on query execution performance.

17.
This paper analyses online competition between private labels and national brands. Purchase data from a grocery retailer operating both on and offline are used to compute two measures of competition (intrinsic loyalty and conquesting power) for both the private label and what this paper terms the “reference brand” (a compound of the different national brands within a category) in 36 product categories. The results show that the competitive position of the private label, relative to that of the reference brand, varies across categories and across channels. Using the framework devised by Steenkamp and Dekimpe (Long Range Plan 30(6):917–930, 1997. https://doi.org/10.1016/S0024-6301(97)00077-0), we combine the two computed measures of competition and classify the private label as a miser, a giant, a fighter or an artisan in each channel and category. The results show: (1) that private labels significantly improve their competitive position online; and (2) that this improvement is not equal across all categories.

18.
In this work, we propose a measure of quantum discord for indistinguishable particles, based on the definition of entanglement of particles given in Wiseman and Vaccaro (Phys Rev Lett 91:097902, 2003. doi: 10.1103/PhysRevLett.91.097902). This discord of particles is then used to evaluate the quantum correlations in a system of two identical bosons (fermions), where the particles perform a quantum random walk described by the Hubbard Hamiltonian on a 1D lattice. The dynamics of the particles is either unperturbed or subject to classical environmental noise, such as random telegraph, pink or brown noise. The observed results are consistent with those for the entanglement of particles, and we observe that on-site interaction between particles has an important protective effect on correlations against the decoherence of the system.
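Of the classical noises mentioned, random telegraph noise is the simplest to generate: a two-level signal whose switches form a Poisson process. The sketch below is a generic generator using a small-time-step approximation, offered only to make the noise model concrete; it is not tied to the paper's Hubbard-model simulations and its parameters are arbitrary.

```python
import numpy as np

def random_telegraph_noise(n_steps, dt, switch_rate, rng=None):
    """±1 random telegraph signal: in each small step dt the sign flips with
    probability switch_rate * dt (Poisson switching, first order in dt)."""
    rng = np.random.default_rng() if rng is None else rng
    state = rng.choice([-1.0, 1.0])
    signal = np.empty(n_steps)
    for k in range(n_steps):
        if rng.random() < switch_rate * dt:
            state = -state
        signal[k] = state
    return signal

noise = random_telegraph_noise(n_steps=10_000, dt=0.01, switch_rate=1.0)
```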

19.
Chemical reaction networks have been models of interest to both theoretical and applied computer scientists, and there has been concern about their physical realism, which calls for study of the atomic properties of chemical reaction networks. Informally, a chemical reaction network is “atomic” if each reaction may be interpreted as the rearrangement of indivisible units of matter. There are several reasonable definitions formalizing this idea. We investigate the computational complexity of deciding whether a given network is atomic according to each of these definitions. Primitive atomicity, which requires each reaction to preserve the total number of atoms, is shown to be equivalent to mass conservation. Since it is known that it can be decided in polynomial time whether a given chemical reaction network is mass-conserving (Mayr and Weihmann, in: International conference on applications and theory of petri nets and concurrency, Springer, New York, 2014), the equivalence we show gives an efficient algorithm to decide primitive atomicity. Subset atomicity further requires all atoms to be species, so intuitively this type of network is endowed with a “better” property than primitive atomic (i.e. mass-conserving) ones in the sense that the atoms are not just abstract indivisible units but also actual participants in reactions. We show that deciding whether a network is subset atomic is in \({\mathsf{NP}}\), and that deciding whether a network is subset atomic with respect to a given atom set is strongly \({\mathsf{NP}}\)-complete. The reachably atomic property, studied by Adleman et al. (On the mathematics of the law of mass action, Springer, Dordrecht, 2014. https://doi.org/10.1007/978-94-017-9041-3_1) and Gopalkrishnan (2016), further requires that each species have a sequence of reactions splitting it into its constituent atoms. Using a combinatorial argument, we show that there is a polynomial-time algorithm to decide whether a given network is reachably atomic, improving upon the result of Adleman et al. that the problem is decidable. We show that the reachability problem for reachably atomic networks is \({\mathsf{PSPACE}}\)-complete. Finally, we demonstrate equivalence relationships between our definitions and some cases of an existing definition of atomicity due to Gnacadja (J Math Chem 49(10):2137, 2011).
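Since primitive atomicity coincides with mass conservation, one concrete way to test it is a linear-programming feasibility check: look for strictly positive species masses under which every reaction preserves total mass. The sketch below, with hypothetical helper names and a scipy-based solver, illustrates that check on a toy network; it is not the paper's treatment of the other atomicity notions.

```python
import numpy as np
from scipy.optimize import linprog

def is_mass_conserving(net_stoichiometry):
    """net_stoichiometry: one row per reaction, one column per species,
    entries = (products - reactants). The network is mass-conserving iff
    there exist masses m >= 1 (equivalently, strictly positive masses,
    up to scaling) with S @ m = 0 for every reaction."""
    S = np.asarray(net_stoichiometry, dtype=float)
    n_species = S.shape[1]
    res = linprog(c=np.zeros(n_species),          # pure feasibility problem
                  A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=[(1.0, None)] * n_species,
                  method="highs")
    return res.success

# toy networks: X + Y -> Z and Z -> X + Y (conserving) versus X -> 2X (not)
print(is_mass_conserving([[-1, -1, 1], [1, 1, -1]]))   # True
print(is_mass_conserving([[1]]))                        # False
```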

20.
As the volume of data generated each day continues to increase, more and more interest is put into Big Data algorithms and the insight they provide. Since these analyses require a substantial amount of resources, including physical machines, power, and time, reliable execution of the algorithms becomes critical. This paper analyzes the error resilience of a select group of popular Big Data algorithms and shows how they can effectively be made more fault-tolerant. Using KULFI (http://github.com/quadpixels/kulfi, 2013) and the LLVM compiler (Proceedings of the 2004 international symposium on code generation and optimization (CGO 2004), San Jose, CA, USA, 2004) for compilation allows injection of artificial soft faults throughout these algorithms, giving a thorough analysis of how faults in different locations can affect the outcome of the program. This information is then used to help guide the incorporation of fault-tolerance mechanisms into the program, making it as impervious as possible to soft faults.
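The kind of soft-fault injection performed by KULFI at the LLVM-IR level can be mimicked, far more crudely, at the application level by flipping a random bit in a value's binary representation and observing how the result degrades. The snippet below is only a toy Python analogue to make the idea concrete; it is unrelated to the actual KULFI/LLVM tooling.

```python
import random
import struct

def flip_random_bit(x: float, rng: random.Random) -> float:
    """Flip one uniformly chosen bit in the IEEE-754 double encoding of x."""
    bits = int.from_bytes(struct.pack(">d", x), "big")
    bits ^= 1 << rng.randrange(64)
    return struct.unpack(">d", bits.to_bytes(8, "big"))[0]

def faulty_mean(values, fault_prob, rng=None):
    """Mean computation in which each addend may suffer a single bit flip."""
    rng = rng or random.Random(0)
    total = 0.0
    for v in values:
        if rng.random() < fault_prob:
            v = flip_random_bit(v, rng)     # inject a soft fault
        total += v
    return total / len(values)

data = [float(i) for i in range(1000)]
print(faulty_mean(data, fault_prob=0.0))    # exact mean: 499.5
print(faulty_mean(data, fault_prob=0.01))   # possibly wildly wrong
```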
