1.
Operator splitting is a powerful concept used in many diverse fields of applied mathematics for the design of effective numerical schemes. Following the success of the additive operator splitting (AOS) in performing efficient nonlinear diffusion filtering on digital images, we analyze the possibility of using multiplicative operator splittings to process images from different perspectives. We start by examining the potential of using fractional step methods to design a multiplicative operator splitting as an alternative to AOS schemes. By means of a Strang splitting, we attempt to use numerical schemes that are known to be more accurate in linear diffusion processes and apply them to images. Initially we implement the Crank-Nicolson and DuFort-Frankel schemes to diffuse noisy signals in one dimension and devise a simple extrapolation that enables the Crank-Nicolson scheme to be used with high accuracy on these signals. We then combine the Crank-Nicolson scheme in 1D with various multiplicative operator splittings to process images. Based on these ideas we obtain some interesting results. However, from a practical standpoint, given the computational expense associated with these schemes and the questionable benefit of applying them to nonlinear diffusion filtering with long timesteps, we conclude that AOS schemes are simple and efficient compared to these alternatives. We then examine the potential utility of multiple timestep methods combined with AOS schemes as a means to expedite the diffusion process. These methods were developed for molecular dynamics applications and are used efficiently in biomolecular simulations. The idea is to split the forces exerted on atoms into different classes according to their behavior in time, and to assign longer timesteps to nonlocal, slowly varying forces such as the Coulomb and van der Waals interactions, whereas local forces such as bond and angle interactions are treated with smaller timesteps.
Multiple timestep integrators can be derived from the Trotter factorization, a decomposition that bears a strong resemblance to a Strang splitting. Both formulations decompose the time propagator into trilateral products to construct multiplicative operator splittings which are second order in time, with the possibility of extending the factorization to higher-order expansions. While a Strang splitting is a decomposition across spatial dimensions, where each dimension is subsequently treated with a fractional step, the multiple timestep method is a decomposition across scales. Thus, multiple timestep methods are a realization of the multiplicative operator splitting idea. For certain nonlinear diffusion coefficients with favorable properties, we show that a simple multiple timestep method can improve the diffusion process.
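As a concrete illustration of the 1D building block mentioned above, here is a minimal Python/NumPy sketch of semi-implicit nonlinear diffusion, assuming a standard Perona-Malik-type diffusivity (the function names and parameter choices are illustrative, not taken from the paper). AOS applies exactly this kind of per-axis implicit solve to each image dimension and averages the results.

```python
import numpy as np

def perona_malik(grad, lam):
    """Edge-stopping diffusivity: near 1 in flat regions, small at edges."""
    return 1.0 / (1.0 + (grad / lam) ** 2)

def semi_implicit_diffusion_1d(u, tau, lam, steps):
    """1D semi-implicit nonlinear diffusion: solve (I - tau*A(u^k)) u^{k+1} = u^k.

    A(u^k) is the tridiagonal diffusion operator with diffusivities frozen
    at the current iterate.  A dense solve is used here for clarity; a
    Thomas (tridiagonal) solver makes each step O(n) in practice.
    """
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        g = perona_malik(np.abs(np.diff(u)), lam)           # n-1 couplings
        diag = 1.0 + tau * (np.r_[0.0, g] + np.r_[g, 0.0])  # Neumann boundaries
        A = np.diag(diag) + np.diag(-tau * g, -1) + np.diag(-tau * g, 1)
        u = np.linalg.solve(A, u)                           # stable for any tau
    return u
```

Because the system matrix has unit column sums, the scheme conserves the signal mean for any timestep tau, which is part of what makes semi-implicit schemes attractive with long timesteps.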
2.
A normal mode analysis is presented and numerical tests are performed to assess the effectiveness of a new time-splitting algorithm proposed recently in Karniadakis et al. (1990) for solving the incompressible Navier-Stokes equations. This new algorithm employs high-order explicit pressure boundary conditions and mixed explicit/implicit stiffly stable time-integration schemes, which can lead to arbitrarily high-order accuracy in time. In the current article we investigate both the time accuracy of the new scheme and the corresponding reduction in boundary-divergence errors for two model flow problems involving solid boundaries. The main finding is that time discretization errors, induced by the nondivergent splitting mode, scale with the order of accuracy of the integration rule employed if a proper rotational form of the pressure boundary condition is used; otherwise a first-order accuracy in time, similar to the classical splitting methods, is achieved. In the former case the corresponding errors in divergence can be completely eliminated, while in the latter case they scale as O(νΔt)^{1/2}.
3.
An atomic snapshot memory object in shared-memory systems enables a set of processes, called scanners, to obtain a consistent picture of the shared memory while other processes, called updaters, keep updating memory locations concurrently. In this paper we present two methods for converting snapshot implementations. Using the first conversion method we obtain a new snapshot implementation in which the scan operation has linear time complexity and the time complexity of the update operation becomes the sum of the time complexities of the original implementation's operations. Applying the second conversion method yields similar results, where in this case the time complexity of the update protocol becomes linear. Although our conversion methods use unbounded space, their space complexity can be bounded using known techniques. One of the most intriguing open problems in distributed wait-free computing is the existence of a linear-time implementation of this object. Using our conversion methods and known constructions we obtain the following results:
  • Consider a system of n processes, each both an updater and a scanner. We present an implementation in which the time complexity of either the update or the scan operation is linear, while the time complexity of the other operation is O(n log n).
  • We present an implementation with linear time complexity when the number of either updaters or scanners is O(n/log n), where n is the total number of processes.
  • We present an implementation with amortized linear time complexity when one of the protocols (either update or scan) is executed significantly more often than the other.
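For readers unfamiliar with the object itself, the scan/update interface can be illustrated with the textbook double-collect construction, sketched below in Python (this is not one of the paper's conversions, and the class and method names are mine; it is lock-free rather than wait-free):

```python
class DoubleCollectSnapshot:
    """Toy snapshot object using the classic double-collect scan.

    Each single-writer register stores a (sequence, value) pair; a scan
    repeatedly reads all registers ("collects") until two consecutive
    collects agree, which guarantees the returned picture existed in
    memory at some instant between the two collects.
    """
    def __init__(self, n):
        self.regs = [(0, None)] * n          # one register per updater

    def update(self, i, value):
        seq, _ = self.regs[i]
        self.regs[i] = (seq + 1, value)      # only writer i writes regs[i]

    def _collect(self):
        return list(self.regs)

    def scan(self):
        while True:
            a = self._collect()
            b = self._collect()
            if a == b:                       # clean double collect
                return [v for _, v in a]
```

The constructions discussed in the paper improve on this pattern by bounding the work a scan or update may need, which is exactly where the linear and O(n log n) complexities above come from.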
    4.
    5.
    Personalization technologies offer powerful tools for enhancing the user experience in a wide variety of systems, but at the same time raise new privacy concerns. For example, systems that personalize advertisements according to the physical location of the user, or according to the search history of the user's friends, introduce new privacy risks that may discourage wide adoption of personalization technologies. This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-based personalization. We survey user attitudes towards privacy and personalization, as well as technologies that can help reduce privacy risks. We conclude with a discussion that frames risks and technical solutions at the intersection of personalization and privacy, as well as areas for further investigation. This framework can help designers and researchers contextualize the privacy challenges and solutions they face when designing personalization systems.
    6.
    Jehuda, Jair; Israeli, Amos. Real-Time Systems, 1998, 14(2): 107-134
    The software meta-controller is an online agent responsible for dynamically adapting an application's software configuration, e.g. altering operational modes and migrating tasks, to best accommodate varying runtime circumstances. In distributed real-time applications such adaptations must be carried out in a manner which maintains the schedulability of all critical tasks while maximizing some notion of system value for all other tasks. For large-scale real-time applications, considering all possible adaptations at the task level is computationally intractable. This paper presents an automated aggregate approach to software meta-control, appropriate for large-scale distributed real-time systems. The aggregate automated meta-control problem is still NP-hard, but it has very practical approximate solutions. Introduced here are two effective approximation algorithms, QDP and GG, with reasonable polynomial time complexity. Both algorithms also provide upper bounds on the optimum system value, useful for deriving absolute, albeit somewhat pessimistic, measures of actual performance. Extensive Monte Carlo analysis is used to illustrate that the expected performance of both algorithms is generally suboptimal by no more than a few percent. Our flexible software meta-control model is also shown to be readily applicable to a wide range of time-sensitive applications.
    7.
    The paper focuses on mining clusters that are characterized by a lagged relationship between the data objects. We call such clusters lagged co-clusters. A lagged co-cluster of a matrix is a submatrix determined by a subset of rows and their corresponding lags over a subset of columns. Extracting such subsets may reveal an underlying governing regulatory mechanism. Such regulatory mechanisms are quite common in real-life settings and appear in a variety of fields: meteorology, seismic activity, stock market behavior, neuronal brain activity, river flow, and navigation, to name but a few. Mining such lagged co-clusters not only helps in understanding the relationships between objects in the domain, but also assists in forecasting their future behavior. For most interesting variants of this problem, finding an optimal lagged co-cluster is an NP-complete problem. We present a polynomial-time Monte-Carlo algorithm for mining lagged co-clusters. We prove that, with fixed probability, the algorithm mines a lagged co-cluster which encompasses the optimal lagged co-cluster with at most a factor-2 overhead in columns and no overhead in rows. Moreover, the algorithm handles noise, anti-correlations, missing values, and overlapping patterns. The algorithm is extensively evaluated using both artificial and real-world test environments. The former enables the evaluation of specific, isolated properties of the algorithm; the latter (river-flow and topographic data) evaluates its ability to efficiently mine relevant and coherent lagged co-clusters in both temporal (i.e., time-series) and non-temporal environments.
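The pairwise alignment at the heart of such mining can be sketched in a few lines of NumPy (an illustrative helper under my own naming and correlation criterion, not the paper's Monte-Carlo algorithm):

```python
import numpy as np

def best_lag(seed, row, max_lag):
    """Lag (in columns) by which `row` trails `seed`, chosen to maximize
    Pearson correlation over the overlapping window.

    Positive lag means `row` is a delayed copy of `seed`; negative means
    it leads.  This is only the pairwise alignment primitive around which
    a miner would grow row/column subsets.
    """
    n = min(len(seed), len(row))
    best_r, best = -2.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:                        # row delayed: row[t+lag] ~ seed[t]
            a, b = seed[:n - lag], row[lag:n]
        else:                               # row leads:   row[t] ~ seed[t-lag]
            a, b = seed[-lag:n], row[:n + lag]
        if len(a) < 2:
            continue
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best = r, lag
    return best
```

A miner in this spirit would repeatedly sample a seed row, align candidate rows to it with such a primitive, and keep the row/column subsets whose aligned submatrix is sufficiently coherent.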
    8.
    This paper explores the scale and scope of transformations in the environmental planning field, and the factors that may advance or impede their widespread adoption. A conceptual model is offered which examines scope (defined as the type, breadth and structure of the transformation) and the scale of its impact (categorized as stakeholder, organizational, institutional or societal), and applies it to the analysis of several cases in Israel where environmental transformations affecting the way in which planning is conducted have been adopted. Conclusions include the identification of conditions for facilitating and advancing transformations, including knowledge of innovative alternatives, initiative, willingness to adopt new practices, and identification of policy windows that emerge during conflict, reform, or crisis. The fostering of relations between environmental non-government organizations and planning systems, together with leadership roles, is also significant in catalyzing environmental transformation.
    9.
    Schemes for compressing black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform proves to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages, called shapes, with different resolutions corresponding to different frequency bands. Hence, different bit allocations are tested, assuming that details at high resolution and in diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error-correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images from the training sequence achieve a compression ratio of 60-65 and a PSNR of 30-33 dB. To investigate compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables; the compression ratio is 40 and PSNRs are 30-36 dB. Images from the training set have similar compression values and quality. Finally, another compression method, based on the end vector bit allocation, is examined.
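As a concrete illustration of the subband decomposition described above, here is one level of a 2D Haar wavelet transform in NumPy, with its exact inverse (the abstract does not say which wavelet the authors used; Haar is assumed here for brevity):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar wavelet transform.

    Pairwise averages/differences of rows, then of columns, yield the four
    subbands (LL, LH, HL, HH) corresponding to different frequency bands.
    Image sides must be even.
    """
    a = img.astype(float)
    lo = (a[0::2, :] + a[1::2, :]) / 2.0      # row lowpass
    hi = (a[0::2, :] - a[1::2, :]) / 2.0      # row highpass
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar2d_inverse(LL, LH, HL, HH):
    """Exact inverse of `haar2d_level` (perfect reconstruction)."""
    h, w = LL.shape
    lo = np.empty((h, 2 * w)); hi = np.empty((h, 2 * w))
    lo[:, 0::2], lo[:, 1::2] = LL + LH, LL - LH
    hi[:, 0::2], hi[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = lo + hi, lo - hi
    return out
```

Recursing on the LL band yields the pyramidal multiresolution structure; in a scheme like the one described, each subband would then be vector quantized with its own bit allocation.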
    10.
    This article covers a range of questions about DSP, including how DSP is defined, why DSP is needed, and how DSP functionality can be implemented, and it points out some common conceptual pitfalls.

    Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号