Similar Literature
20 similar records found (search time: 902 ms)
1.
Surface acoustic wave (SAW) devices have been reported as microfluidic actuators, for example as pumps for water droplets (Renaudin et al. in μTAS, pp 599–601, 2004, 1:551–553, 2005; Sritharan et al. in Appl Phys Lett 88:054102, 2006; Wixforth in Anal Bioanal Chem 379:982–991, 2004; Yamamoto et al. in μTAS, pp 1072–1074, 2005). A SAW device consists only of an interdigital transducer (IDT) fabricated on a piezoelectric substrate. Because of this simple structure, IDTs are advantageous in terms of integration, miniaturization, free positioning on the substrate, and a simple fabrication process, which makes SAW devices easy to apply to integrated chemical systems such as labs-on-a-chip. The SAW drives the liquid homogeneously through the transmitted surface vibration of the substrate, so no pressure loading at the channel ends is needed to pump the liquid. A SAW can therefore pump liquid in both closed and open channels, whereas continuous-flow pumping with an external pump is difficult in a closed fluid channel, where no loading pressure can be applied. In this paper, we propose and fabricate a microfluidic device that combines a cyclical (closed-loop) fluid channel with a SAW actuator for liquid pumping. The device is fabricated on a piezoelectric substrate (LiNbO3) by UV photolithography and wet etching; the structural material of the cyclical fluid channel is the epoxy photoresist SU-8 100. Both continuous-flow and reciprocal-flow pumping in the channel are demonstrated. After optimizing the structural parameters of the SAW pump, maximum flow velocities of 32.5, 71.3 and 108.0 mm/s are achieved for channel widths of 500, 1,000 and 2,000 μm, respectively.

2.
This paper investigates the pth moment exponential stability of a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions for the pth moment exponential stability of the considered system are derived. The results obtained in this paper are new and complement and improve some previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.
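For readers unfamiliar with the stability notion used in this abstract, a standard textbook formulation of pth moment exponential stability (in common notation, not taken verbatim from the paper) is:

```latex
% x(t;\xi) denotes the solution of the stochastic system with initial data \xi.
% The trivial solution is pth moment exponentially stable if there exist
% constants C > 0 and \lambda > 0 such that, for every admissible \xi,
\mathbb{E}\,\|x(t;\xi)\|^{p} \;\le\; C\,\mathbb{E}\,\|\xi\|^{p}\, e^{-\lambda t},
\qquad t \ge 0 .
```

The case p = 2 is the familiar mean-square exponential stability.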

3.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the “no three disjoint equivoluminous subsets” question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a “multicolored” sunflower conjecture in ${\mathbb{Z}_3^n}$ and show that (if true) it implies a negative answer to the “strong USP” conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in ${\mathbb{Z}_3^n}$ is a strengthening of the well-known (ordinary) sunflower conjecture in ${\mathbb{Z}_3^n}$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of (2.51…)^n on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of (2.21…)^n of Edel (2004) on the size of the largest 3-sunflower-free set in ${\mathbb{Z}_3^n}$.
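As background for this abstract, the standard definition of a sunflower and the Erdős–Rado conjecture read as follows (common notation, not quoted from the paper):

```latex
% A k-sunflower is a family of k sets S_1, \dots, S_k whose pairwise
% intersections all equal one common "core" Y (possibly empty):
S_i \cap S_j = Y \quad \text{for all } i \ne j .
% Erdős–Rado sunflower conjecture: there is a constant c = c(k) such that
% every family \mathcal{F} of sets, each of size s, with
|\mathcal{F}| > c(k)^{s}
% must contain a k-sunflower.
```

The "sunflower-free sets in $\mathbb{Z}_3^n$" discussed above are the analogous families in which no three elements form a sunflower under coordinatewise structure.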

4.
Wireless sensor networks (WSNs), one class of commercial wireless mesh networks (WMNs), are envisioned to provide an effective solution for sensor-based AmI (Ambient Intelligence) systems and applications. To enable seamless communication between AmI sensor networks and the most popular TCP/IP networks, the best solution model is to run TCP/IP directly on WSNs (Mulligan et al. 2009; Hui and Culler 2008; Han and Mam 2007; Kim et al. 2007; Xiaohua et al. 2004; Dunkels et al. 2004; Dunkels 2001). In this case, an IP assignment method is required to assign each sensor node a unique IP address. SIPA (Dunkels et al. 2004) is the best-known IP assignment method; it uses the spatial relations and locations of sensor nodes to assign their IP addresses, and it has been applied in Contiki (Dunkels et al. 2004), a well-known WSN operating system, to support the 6LoWPAN protocol. In Chang et al. (2009), we proposed the SLIPA (Scan-Line IP Assignment) algorithm to improve the assignment success rate (ASR) obtained by SIPA. SLIPA achieves a good ASR when sensor nodes are uniformly distributed; however, for other deployment distributions the improvement is limited. This paper proposes a new spatial IP assignment method, called SLIPA-Q (SLIPA with equal-quantity partition), to improve SLIPA. Experiments show that, over 1,000 trials with 1,000 randomly deployed sensor nodes, the average ASR obtained by SLIPA-Q is more than twice that obtained by SLIPA. At the same 88% ASR, the average numbers of sensor nodes that can be successfully assigned by SLIPA-Q, SLIPA, and SIPA are 950, 850, and 135, respectively. Compared with previous spatial IP assignment methods, SLIPA-Q achieves dramatic ASR improvements when assigning IP addresses to a large set of sensor nodes.
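The abstract does not spell out the equal-quantity partition. A minimal sketch of the general idea (strip boundaries chosen so that each strip holds the same number of nodes, then a per-strip scan-line assignment) might look as follows; the function name, the strip scheme, and the address format are illustrative assumptions, not the published SLIPA-Q algorithm:

```python
import math

def equal_quantity_scanline(nodes, strips, base=(10, 0)):
    """Hypothetical sketch: partition nodes into vertical strips that each
    hold an equal number of nodes (equal-quantity partition), then scan
    each strip bottom-to-top and hand out sequential host numbers."""
    ordered = sorted(nodes, key=lambda p: p[0])          # scan along x
    per_strip = math.ceil(len(ordered) / strips)
    assignment = {}
    for s in range(strips):
        strip = ordered[s * per_strip:(s + 1) * per_strip]
        for host, node in enumerate(sorted(strip, key=lambda p: p[1])):
            assignment[node] = f"{base[0]}.{base[1]}.{s}.{host + 1}"
    return assignment

addrs = equal_quantity_scanline([(3, 1), (1, 2), (2, 5), (4, 4)], strips=2)
```

Because strips are sized by node count rather than by area, clustered deployments still yield evenly filled address blocks, which is the intuition behind improving ASR on non-uniform distributions.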

5.
Wavelet frame based models for image restoration have been extensively studied over the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the International Congress of Mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS Lecture Notes Series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most wavelet frame based models designed in the past penalize the ℓ1 norm of the wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors proposed to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and they demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimization, based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method of Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but also generates recovered images of even higher quality. This study reassures the feasibility of using the ℓ0 “norm” for image restoration problems.
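The ℓ0 model referred to in this abstract has, in the analysis-based form common in this literature, the generic shape below; the symbols W (wavelet frame transform) and A (measurement or blurring operator) are the usual notation, assumed rather than quoted from the paper:

```latex
\min_{u}\ \frac{1}{2}\,\|Au - f\|_2^2 \;+\; \big\|\lambda \cdot (Wu)\big\|_0 ,
% where \|\cdot\|_0 counts the nonzero (weighted) frame coefficients of u
% and \lambda is an entrywise vector of regularization weights.
```

Replacing $\|\cdot\|_0$ by $\|\cdot\|_1$ recovers the convex ℓ1 models the paper compares against.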

6.
In a near-field recording (NFR) system, since the gap between the solid immersion lens (SIL) and the disc is under 100 nm, the traditional servo method cannot be applied, and modification is necessary for stable servo performance. The gap error servo method is well known as a typical servo for the focusing direction in NFR systems (Verschuren et al. in JJAP 44:3554–3558, 2005), and the pull-in procedure (Zijp et al. in IEEE Trans Magn 41(2):1042–1046, 2005) and hand-over (Lee et al. in JJAP 44(5B):3423–3426, 2005) methods are also good approaches for stable servo operation in the near field. We obtain good servo performance using the gap error servo method, and our experimental result is consistent with previous research. For the track servo, one-beam push–pull was generally used in traditional optical disc drives. However, it contains a DC offset component caused by lens movement, so a good RF signal and a good track error signal cannot be obtained simultaneously. Therefore, in this paper we introduce a modified servo method that uses the gap error signal and lens position, and present successful servo results for a surface-recording disc and a disc with a 1.7 μm cover layer. Tests were performed with a 4-axis actuator, which gives good performance not only for the gap and track servos, but also for radial and tangential tilt. Finally, we suggest a method for decoupling the track-shift and tilt components using the differential gap error signal and a simple slit structure. The experimental results verify that the slit structure can eliminate the track-shift component effectively.

7.
The stochastic collocation method (Babuška et al. in SIAM J Numer Anal 45(3):1005–1034, 2007; Nobile et al. in SIAM J Numer Anal 46(5):2411–2442, 2008a; SIAM J Numer Anal 46(5):2309–2345, 2008b; Xiu and Hesthaven in SIAM J Sci Comput 27(3):1118–1139, 2005) has recently been applied to stochastic problems that can be transformed into parametric systems. Meanwhile, the reduced basis method (Maday et al. in Comptes Rendus Mathematique 335(3):289–294, 2002; Patera and Rozza in Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, http://augustine.mit.edu, 2007; Rozza et al. in Arch Comput Methods Eng 15(3):229–275, 2008), primarily developed for solving parametric systems, has recently been used for stochastic problems (Boyaval et al. in Comput Methods Appl Mech Eng 198(41–44):3187–3206, 2009; Arch Comput Methods Eng 17:435–454, 2010). In this work, we compare the performance of the two methods when applied to the solution of linear stochastic elliptic problems. Two comparison criteria are considered: (1) convergence of the approximation error; (2) computational cost, for both offline construction and online evaluation. Numerical experiments are performed for problems from low dimensions $O(1)$ through moderate dimensions $O(10)$ to high dimensions $O(100)$. The main result of our comparison is that the reduced basis method converges better in theory and faster in practice than the stochastic collocation method for smooth problems, and is more suitable for large-scale, high-dimensional stochastic problems when computational cost is taken into account.
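A model problem of the kind compared in this abstract is the linear elliptic equation with a parametrized (random) coefficient; the following generic formulation is standard in this literature and is assumed here rather than quoted from the paper:

```latex
-\nabla \cdot \big( a(x, y)\, \nabla u(x, y) \big) = f(x) \quad \text{in } D,
\qquad u = 0 \ \text{on } \partial D ,
% where y = (y_1, \dots, y_N) \in \Gamma collects the random parameters.
% Both stochastic collocation and reduced basis methods build a surrogate
% of the parameter-to-solution map y \mapsto u(\cdot, y); they differ in
% whether the surrogate interpolates on a sparse grid in \Gamma or
% projects onto a basis of precomputed "snapshot" solutions.
```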

8.
We propose a uniform method to encode various types of trees succinctly, including ordered (ordinal), k-ary (cardinal), and unordered (free) trees. We show that the approach is intrinsically suitable for obtaining entropy-based encodings of trees (such as the degree-distribution entropy). Previously existing succinct encodings of trees use ad hoc techniques for each particular family of trees. Additionally, the succinct encodings obtained using the uniform approach improve upon the existing succinct encodings of each family: for ordered trees, it simplifies the encoding while supporting the full set of navigational operations, and it also simplifies the implementation of many supported operations. Applied to k-ary trees, the approach yields a succinct encoding that supports both cardinal-type operations (e.g. determining the child labeled i) and the full set of ordinal-type operations (e.g. reporting the number of siblings to the left of a node); previous succinct encodings of k-ary trees do not support both types of operations simultaneously (Benoit et al. in Algorithmica 43(4):275–292, 2005; Raman et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 233–242, 2002). For unordered trees, the approach achieves the first succinct encoding. The approach is based on two recursive decompositions of trees into subtrees. Recursive decomposition of a structure into substructures is a common technique in succinct encodings and has been used to encode (ordered) trees (Geary et al. in ACM Trans. Algorithms 2(4):510–534, 2006; He et al. in ICALP, pp. 509–520, 2007) and dynamic binary trees (Munro et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 529–536, 2001; Storm in Representing Dynamic Binary Trees Succinctly, Master’s thesis, 2000). The main distinction of the approach in this paper is that a tree is decomposed into subtrees in a manner that keeps the subtrees maximally isolated from each other. This intermediate decomposition result is interesting in its own right and has proved useful in other applications (Farzan et al. in ICALP (1), pp. 451–462, 2009; Farzan and Munro in ICALP (1), pp. 439–450, 2009; Farzan and Kamali in ICALP, 2011).

9.
In this paper, inspired by some types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated-lattice versions of these filters and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices constructed via them.

10.
We revisit from a fairness point of view the problem of online load balancing in the restricted assignment model and the 1-∞ model. We consider both a job-centric and a machine-centric view of fairness, as proposed by Goel et al. (In: Symposium on Discrete Algorithms, pp. 384–390, 2005). These notions are equivalent to the approximate notion of prefix competitiveness proposed by Kleinberg et al. (In: Proceedings of the 40th Annual Symposium on Foundations of Computer Science, p. 568, 2001), as well as to the notion of approximate majorization, and they generalize the well studied notion of max-min fairness. We resolve a question posed by Goel et al. (2005), proving that the greedy strategy is globally O(log m)-fair, where m denotes the number of machines. This result improves upon the analysis of Goel et al. (2005), who showed that the greedy strategy is globally O(log n)-fair, where n is the number of jobs. Typically n ≫ m, and therefore our improvement is significant. Our proof matches the known lower bound for the problem with respect to the measure of global fairness. The improved bound is obtained by analyzing, in a more accurate way, the more general restricted assignment model studied previously in Azar et al. (J. Algorithms 18:221–237, 1995). We provide an alternative bound which is never worse than the bounds of Azar et al. (1995), and is strictly better in many cases. The bound we prove is, in fact, much more general: it bounds the load on any prefix of the most loaded machines. As a corollary of this more general bound, we find that the greedy algorithm results in an assignment that is globally O(log m)-balanced. This last result generalizes the previous result of Goel et al. (2005), who proved that the greedy algorithm yields an assignment that is globally O(log m)-balanced for the 1-∞ model.
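The greedy strategy analyzed in this abstract is simple to state: assign each arriving job to the currently least-loaded machine among those it is allowed to use. A minimal sketch for the restricted assignment model follows; the data shapes and tie-breaking rule are illustrative assumptions:

```python
def greedy_restricted_assignment(jobs):
    """Greedy online strategy for the restricted assignment model.
    Each job is (weight, set_of_permitted_machines); it is assigned to
    the currently least-loaded machine among those permitted."""
    machines = set().union(*(allowed for _, allowed in jobs))
    load = {m: 0.0 for m in machines}
    assignment = []
    for weight, allowed in jobs:
        target = min(allowed, key=lambda m: load[m])
        load[target] += weight
        assignment.append(target)
    return load, assignment

# The 1-∞ model is the special case where every job has unit weight.
load, asg = greedy_restricted_assignment([
    (1, {0, 1}), (1, {0}), (1, {0, 1}), (1, {1, 2}),
])
```

The fairness results above concern not just the maximum load but the total load on every prefix of the most loaded machines under this rule.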

11.
This work addresses the problem of fault detection and diagnosis (FDD) for a quad-rotor mini air vehicle (MAV). Actuator faults are considered in this paper. The basic idea behind the proposed method is to estimate the fault signals using extended state observer theory. To estimate the faults, a polynomial observer (Aguilar et al. 2011; Mata-Machuca et al., Commun Nonlinear Sci Numer Simul 15(12):4114–4130, 2010; BioSystems 100(1):65–69, 2010) is presented, using the available measurements and known inputs of the system. In order to investigate the diagnosability properties of the system, a differential algebra approach is proposed (Cruz-Victoria et al., J Frankl Inst 345(2):102–118, 2008; Martinez-Guerra and Diop, IEE P-Contr Theor Ap 151(1):130–135, 2004). The effectiveness of the methodology is illustrated by means of numerical simulations.

12.
Quantum correlations in qutrit Werner states are investigated extensively with five popular methods, namely, original quantum discord (OQD) (Ollivier and Zurek in Phys Rev Lett 88:017901, 2001), measurement-induced disturbance (MID) (Luo in Phys Rev A 77:022301, 2008), ameliorated MID (AMID) (Girolami et al. in J Phys A Math Theor 44:352002, 2011), relative entropy (RE) (Modi et al. in Phys Rev Lett 104:080501, 2010) and geometric discord (GD) (Dakić et al. in Phys Rev Lett 105:190502, 2010). Two different analytic expressions of quantum correlations are derived. The quantum correlations captured by the former four methods coincide and are larger than those obtained via the GD method; nonetheless, all five qualitatively characterize quantum correlations in the concerned states. Moreover, as in the qubit case, quantum correlations also exist in separable qutrit Werner states.

13.
K-anonymity (Samarati and Sweeney 1998; Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001; Sweeney, Int J Uncertain Fuzziness Knowl-Based Syst, 10(5):557–570, 2002) and its variants, l-diversity (Machanavajjhala et al., ACM TKDD, 2007) and t-closeness (Li et al. 2007) among others, are anonymization techniques for relational and transaction data, used to protect privacy against re-identification attacks. A relational dataset D is k-anonymous if every record in D has at least k−1 other records with identical quasi-identifier attribute values; combining the released data with external data then never allows the recipient to associate a released record with fewer than k individuals (Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001). However, the current concept of k-anonymity on transaction data treats all items as quasi-identifiers; the anonymized data set consists of groups of k identical transactions and suffers from lower data utility (He and Naughton 2009; He et al. 2011; Liu and Wang 2010; Terrovitis et al., VLDB J, 20(1):83–106, 2011; Terrovitis et al. 2008). To improve the utility of anonymized transaction data, this work proposes a novel anonymity concept for transaction data that contain both quasi-identifier items (QID) and sensitive items (SI). A transaction that contains sensitive items must have at least k−1 other identical transactions (Ghinita et al., IEEE TKDE, 33(2):161–174, 2011; Xu et al. 2008); for a transaction that does not contain a sensitive item, no anonymization is required. A transaction dataset that satisfies this property is said to be sensitive k-anonymous. Three algorithms, Sensitive Transaction Neighbors (STN), Gray Sort Clustering (GSC) and Nearest Neighbors for K-anonymization (K-NN), are developed. These algorithms achieve sensitive k-anonymity on transaction data by adding/deleting QID items and only adding SI items. Additionally, a simple “privacy value” is proposed to evaluate the degree of privacy of different types of k-anonymity on transaction data. Extensive numerical simulations were carried out to demonstrate the characteristics of the proposed algorithms and to compare them with other types of k-anonymity approaches. The results show that each technique has its own advantages under different criteria such as running time, operation, and information loss, and they can serve as a guideline for selecting an anonymization technique for different data sets and applications.
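The sensitive k-anonymity property itself is straightforward to verify on a dataset. Below is a minimal sketch of such a checker (not one of the STN/GSC/K-NN anonymization algorithms, whose details the abstract does not give); the data shapes are illustrative assumptions:

```python
from collections import Counter

def is_sensitive_k_anonymous(transactions, sensitive_items, k):
    """Check the sensitive k-anonymity property: every transaction that
    contains at least one sensitive item must appear at least k times
    in total (i.e. have k-1 other identical transactions); transactions
    without sensitive items need no protection."""
    counts = Counter(frozenset(t) for t in transactions)
    return all(counts[frozenset(t)] >= k
               for t in transactions
               if sensitive_items & set(t))

db = [{"a", "b", "hiv"}, {"a", "b", "hiv"}, {"a", "c"}]
ok = is_sensitive_k_anonymous(db, {"hiv"}, k=2)   # two identical sensitive rows
bad = is_sensitive_k_anonymous(db, {"hiv"}, k=3)  # only two copies, so k=3 fails
```

Note that the unique transaction {"a", "c"} does not violate the property, which is exactly the utility gain over plain transactional k-anonymity.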

14.
In a very recent paper, Peng and Liu (Neural Comput Appl 20:543–547, 2011) investigated the pth moment stability of stochastic Grossberg–Hopfield neural networks with Markov volatilities using Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000). We should point out that this theorem of Mao et al. establishes pth moment exponential stability for a class of stochastic dynamical systems with constant delay, and therefore cannot be applied to the case of variable time delays. It is also worth emphasizing that Peng and Liu (2011) nevertheless used that theorem to discuss pth moment exponential stability of Grossberg–Hopfield neural networks with variable delays, so there is a gap between Theorem 1 of Peng and Liu (Neural Comput Appl 20:543–547, 2011) and Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000). In this paper, we fill this gap. Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.

15.
16.
The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP; Rekhter and Li in RFC 4271 of the Internet Engineering Task Force, 2006). Using BGP, ASes can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains' autonomy in network operations and in business decisions, but it comes at a price: the interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence (see, e.g., Varadhan et al. in Comput Networks 32(1):1–16, 2000). Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy (see, e.g., Gao and Rexford in IEEE/ACM Trans Network 9(6):681–692, 2001; Griffin et al. in Proceedings of the 9th ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM'03), pp. 61–72, ACM Press, New York, 2003). In addition to this engineering or “protocol-design” approach, researchers have approached interdomain routing from an economic or “mechanism-design” point of view. It is known that lowest-cost-path (LCP) routing can be implemented in an incentive-compatible, BGP-compatible manner (Feigenbaum et al. in Distribut Comput 18(1):61–72, 2005; Shneidman and Parkes in Proceedings of the 23rd ACM Symposium on Principles of Distributed Computing (PODC'04), pp. 88–97, ACM Press, New York, 2004) but that several other natural classes of policies cannot (Feigenbaum et al. in Theor Comput Sci 378(2):175–189, 2007; Feigenbaum et al. in Distribut Comput 18(4):293–305, 2006). In this paper, we present the first example of a class of interdomain-routing policies that is more general than LCP routing and for which BGP itself is both incentive-compatible and guaranteed to converge. We also present several steps toward a general theory of incentive-compatible, BGP-compatible interdomain routing.
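LCP routing, the known incentive-compatible baseline mentioned in this abstract, simply assigns every AS its lowest-cost path to the destination. A minimal centralized sketch (Dijkstra run from the destination; the graph and its symmetric link costs are illustrative, and real BGP computes this in a distributed fashion) is:

```python
import heapq

def lowest_cost_paths(graph, dest):
    """Compute, for every AS, the cost of its lowest-cost path (LCP)
    to `dest` by running Dijkstra from the destination.
    `graph` maps a node to {neighbor: link_cost}."""
    cost = {dest: 0}
    frontier = [(0, dest)]
    while frontier:
        c, v = heapq.heappop(frontier)
        if c > cost.get(v, float("inf")):
            continue                       # stale queue entry
        for u, w in graph[v].items():
            if c + w < cost.get(u, float("inf")):
                cost[u] = c + w
                heapq.heappush(frontier, (c + w, u))
    return cost

g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
costs = lowest_cost_paths(g, "A")
```

The policy classes the paper studies generalize this: each AS ranks routes by a policy rather than by pure path cost, which is where incentive compatibility becomes nontrivial.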

17.
In this paper, design aspects and a challenging packaging solution for a monolithic 3D force sensor are presented. The previously developed design and process flow (Vázsonyi et al. 123–124:620–626, 2005; Molnár et al. 90:40–43, 2012) were improved by an additional hybrid wafer bonding step of simultaneous anodic and metal bonding. This electrostatic-force-assisted metal bonding can ensure both the mechanical and the electrical integrity of the device. The novel process sequence eliminates the need for a possible flip-chip bonding step and for chemical–mechanical polishing. The applied glass substrate improves the thermal isolation and thermo-mechanical stability of the integrated system, since the thermal expansion coefficients of the chosen glass material and silicon (Si) differ only slightly, minimizing the residual thermo-mechanical stress during operation.

18.
Given natural limitations on the length of DNA sequences, designing phylogenetic reconstruction methods that are reliable under limited information is a crucial endeavor. There have been two approaches to this problem: reconstructing partial but reliable information about the tree (Mossel in IEEE Comput. Biol. Bioinform. 4:108–116, 2007; Daskalakis et al. in SIAM J. Discrete Math. 25:872–893, 2011; Daskalakis et al. in Proc. of RECOMB 2006, pp. 281–295, 2006; Gronau et al. in Proc. of the 19th Annual SODA 2008, pp. 379–388, 2008), and reaching “deeper” into the tree through reconstruction of ancestral sequences. In the latter category, Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006) settled an important conjecture of M. Steel (My favourite conjecture, preprint, 2001), showing that, under the CFN model of evolution, all trees on n leaves with edge lengths bounded by the Ising model phase transition can be recovered with high probability from genomes of length O(log n) with a polynomial-time algorithm. Their method has a running time of O(n^10). Here we enhance our methods from Daskalakis et al. (Proc. of RECOMB 2006, pp. 281–295, 2006) with the learning of ancestral sequences and provide an algorithm for reconstructing a sub-forest of the tree that is reliable given the available data, without requiring a priori known bounds on the edge lengths of the tree. Our methods are based on an intuitive minimum spanning tree approach and run in O(n^3) time. For the case of full reconstruction of trees with edges under the phase transition, we maintain the same asymptotic sequence-length requirements as Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006), despite the considerably faster running time.
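The minimum spanning tree step at the heart of the O(n^3) method can be sketched with a plain Prim's algorithm on a pairwise distance matrix; the toy matrix below stands in for the paper's evolutionary-distance estimates between taxa, which are not reproduced here:

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a dense n-by-n distance matrix; returns the
    MST as a list of (parent, child, distance) edges. With a dense
    matrix this runs in O(n^2) time, so using it as an inner step is
    compatible with an overall O(n^3) reconstruction method."""
    n = len(dist)
    edges = []
    best = {v: (dist[0][v], 0) for v in range(1, n)}  # cheapest link into tree
    while best:
        v = min(best, key=lambda u: best[u][0])
        d, parent = best.pop(v)
        edges.append((parent, v, d))
        for u in best:
            if dist[v][u] < best[u][0]:
                best[u] = (dist[v][u], v)
    return edges

# Toy 4-taxon distance matrix: {0,1} and {2,3} form two tight clusters.
D = [[0, 2, 9, 9],
     [2, 0, 9, 9],
     [9, 9, 0, 3],
     [9, 9, 3, 0]]
mst = minimum_spanning_tree(D)
```

Intuitively, short MST edges connect taxa whose separating paths are reliably estimable, which is what guides the choice of a reconstructible sub-forest.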

19.
Nonlocal total variation (TV) regularization (Gilboa and Osher in Multiscale Model Simulat 7(3):1005–1028, 2008; Zhou and Schölkopf in Pattern Recognition, Proceedings of the 27th DAGM Symposium, Springer, Berlin, pp 361–368, 2005) has been widely used for natural image processing, since it is able to preserve repetitive textures and details of images. However, its applications have been limited in practice, due to the high computational cost for large-scale problems. In this paper, we apply domain decomposition methods (DDMs) (Xu et al. in Inverse Probl Imag 4(3):523–545, 2010) to nonlocal TV image restoration. With DDMs, the original problem is decomposed into much smaller subproblems defined on subdomains; each subproblem can be solved efficiently by the split Bregman algorithm and the Bregmanized operator splitting algorithm of Zhang et al. (SIAM J Imaging Sci 3(3):253–276, 2010). Furthermore, by using a coloring technique on the domain decomposition, all subproblems defined on subdomains with the same color can be computed in parallel. Our numerical examples demonstrate that the proposed methods efficiently solve nonlocal TV based image restoration problems such as denoising, deblurring and inpainting, in parallel and with a considerable speedup ratio and speedup efficiency.
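For reference, the nonlocal TV functional referred to in this abstract has the following standard (Gilboa–Osher) form, in discrete notation assumed here:

```latex
J_{\mathrm{NLTV}}(u) \;=\; \sum_{x \in \Omega}
\sqrt{\; \sum_{y \in \Omega} \big(u(y) - u(x)\big)^{2}\, w(x, y) \;} ,
% where the nonnegative weights w(x,y) measure patch similarity between
% pixels x and y. Coupling each pixel to all similar pixels is what
% preserves repetitive texture, and also what makes the functional
% expensive, motivating the domain decomposition above.
```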

20.
Thermal flying-height control (TFC) sliders with thermal actuation, which make it possible to control head–disk spacing, have been introduced in commercial products for compensating the flying-height loss and reducing the risk of head–disk contacts, and thus for increasing the bit density (Gupta et al. in ASME J Tribol 123:380–387, 2001; Juang and Bogy in ASME J Tribol 129:570–578, 2007; Kurita et al. in Microsyst Technol 12:369–375, 2006; Shiramatsu et al. in IEEE Trans Magn 42:2513–2515, 2006). However, with increasing areal density, it is also necessary to increase the track density, which requires improving the head-positioning system in terms of fast transition from one track to another (track seeking), fast and accurate settling, and precise following of the target track. Dual-actuator systems (Choe in A thermal driven micro actuator for hard disk drive, Proceedings of the APMRC 2010, Nov 10–12, 2010, Singapore; Bain et al. in Electrothermal actuator for hard disk drive application, Proceedings of the APMRC 2010, Nov 10–12, 2010, Singapore; Furukawa et al. in Fabrication and test of thermal actuator, ISPS 2011, Jun 13–14, Santa Clara, CA, USA, 2011) have been proposed to meet these requirements. These systems consist of a voice-coil motor (VCM) as a first-stage actuator and a transducer (piezoelectric, electromagnetic, electrostatic or thermal) as a second-stage actuator. The second-stage actuator can be designed to actuate the movement of the suspension (suspension driven), the slider (slider driven) or the head element (head driven). Most reported dual-actuator systems are suspension driven or slider driven. Recently, Choe (Proceedings of the APMRC 2010) and Bain et al. (Proceedings of the APMRC 2010) reported using thermal actuators to drive head movement, attaining a thermal transient of less than 10 μs in 2-D finite element simulation. Using thermal actuators to accurately position the read/write element could be a promising technology for mass production of future HDDs; this kind of control scheme is termed thermal positioning control (TPC). The objective of TPC actuator design is to achieve a large actuation stroke as well as a wide frequency bandwidth. In our studies, the design procedure involves several steps: (1) fundamental studies with a simple TPC slider structure by finite element simulation, to explore the feasibility of TPC actuation, estimate the working frequency range, and identify problems induced by the TPC actuator; (2) prototyping the TPC slider and testing its frequency characteristics to confirm the feasibility and achievability of TPC actuation; (3) increasing the TPC actuation stroke and frequency bandwidth by improving the TPC slider structure and servo control schemes. This paper explores the feasibility of the TPC slider by finite element simulation. The principle and structural modeling of a slider with a TPC heater are first introduced. A static simulation is then carried out to study the steady deformation displacement at the read/write element, and a transient analysis is conducted to estimate the deformation displacement response. It was found that a 7 nm deformation stroke at the read/write element can be attained at steady state with 50 mW input power, and that the deformation displacement is about 1.7 nm at 1.5 ms after power is applied to the TPC heater (a frequency of 1 kHz based on a first-order delay system). Meanwhile, protrusion on the air bearing surface (ABS) was found to be a problem for the slider's flying performance; the ABS design was therefore improved to reduce the protrusion's effect, and the cross-talk effect between the TFC and TPC actuators was then investigated.
