  Full text (subscription access)   24
  Free full text   1
By subject:
  Chemical industry   2
  Machinery and instruments   3
  Radio and electronics   1
  General industrial technology   2
  Metallurgical industry   1
  Atomic energy technology   1
  Automation technology   15
By year:
  2021   2
  2020   1
  2018   2
  2016   1
  2014   5
  2013   1
  2010   3
  2009   1
  2008   1
  2007   1
  2006   1
  2005   1
  2002   1
  1999   2
  1995   1
  1984   1
25 results found (search time: 140 ms).
1.
Journal of Mathematical Imaging and Vision - In this paper, we address the problem of denoising images obtained under low-light conditions for the Poisson shot noise model. Under such conditions,...
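As a minimal illustration of the noise model this paper targets (and not of its denoising method), the sketch below applies Poisson shot noise to a synthetic intensity ramp at two photon budgets; the lower the peak photon count, the larger the relative error. The image, photon counts, and error metric are illustrative assumptions.

```python
# Illustration of Poisson shot noise under low-light conditions (hypothetical data,
# not the paper's denoising algorithm).
import numpy as np

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 256).reshape(16, 16)   # toy intensity "image" in [0, 1]

for peak in (100.0, 5.0):                      # photons collected at the brightest pixel
    noisy = rng.poisson(clean * peak) / peak   # Poisson draw per pixel, rescaled back
    mae = np.abs(noisy - clean).mean()
    print(f"peak = {peak:5.1f} photons -> mean absolute error {mae:.3f}")
```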
2.
This research aims to probe the porosity profile and polymerization shrinkage of two different dual-cure resin cements with different dentin bonding systems. The self-adhesive resin cement RelyX U200 (named RU) and the conventional Allcem Core (named AC) were analyzed by x-ray microtomography (μCT) and scanning electron microscopy (SEM). Each cement was divided into two groups (n = 5): dual-cured (RUD and ACD) and self-cured (RUC and ACC). μCT demonstrated that the method of polymerization influences the polymerization shrinkage but not the porosity profile. A lower concentration of pores was observed for the conventional resin cement (AC), independently of the method used for curing the sample. In addition, SEM showed that AC has a more uniform surface and a smaller particle size. The method of polymerization influenced the polymerization shrinkage, since no contraction was observed for either RUC or ACC, in contrast with the results from the dual-cured samples. For RUD and ACD the polymerization shrinkage was greater in the lower third of the sample and smaller in the upper third. This mechanical behavior is attributed to polymerization toward the light. μCT proved to be a reliable technique to probe porosity and contraction due to polymerization of dental cements.
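As a hedged illustration of how a porosity profile of this kind can be quantified from tomographic data (synthetic voxel stack, not the study's μCT scans or analysis software), the sketch below reports the pore fraction in the lower, middle, and upper thirds of a binarized volume along the specimen height.

```python
# Porosity by thirds from a binarized tomographic volume (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
# True marks a pore voxel; here about 2 % of the volume, chosen arbitrarily
volume = rng.random((90, 64, 64)) < 0.02

# split along the specimen height (axis 0) into lower, middle and upper thirds
for name, part in zip(("lower", "middle", "upper"), np.array_split(volume, 3, axis=0)):
    print(f"{name} third: porosity = {part.mean() * 100:.2f} %")
```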
3.
4.
Discrete optimization problems arise in a variety of domains, such as VLSI design, transportation, scheduling and management, and design optimization. Very often, these problems are solved using state space search techniques. Due to the high computational requirements and inherent parallel nature of search techniques, there has been a great deal of interest in the development of parallel search methods since the dawn of parallel computing. Significant advances have been made in the use of powerful heuristics and parallel processing to solve large-scale discrete optimization problems. Problem instances that were considered computationally intractable only a few years ago are now routinely solved on server-class symmetric multiprocessors and small workstation clusters. Parallel game-playing programs are challenging the best human minds at games like chess. In this paper, we describe the state of the art in parallel algorithms used for solving discrete optimization problems. We address heuristic and nonheuristic techniques for searching graphs as well as trees, and speed-up anomalies in parallel search that are caused by the inherent speculative nature of search techniques.
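The sketch below is a minimal example of the kind of parallel state-space search the survey covers, not an algorithm from the paper: a depth-first branch-and-bound for a toy 0/1 knapsack instance whose search tree is split at a fixed depth and whose subtrees are explored by separate worker processes. The instance data, split depth, and bounding rule are illustrative assumptions.

```python
# Parallel branch-and-bound over subtrees of a 0/1 knapsack search tree (toy instance).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

VALUES  = [60, 100, 120, 75, 40, 90]    # hypothetical item values
WEIGHTS = [10,  20,  30, 15,  5, 25]    # hypothetical item weights
CAPACITY = 60
SPLIT_DEPTH = 3                         # items fixed serially before the parallel phase

def search_subtree(fixed_choices):
    """Exhaustively search one subtree, given take/skip choices for the first items."""
    best = 0
    def dfs(i, weight, value):
        nonlocal best
        if weight > CAPACITY:
            return
        if i == len(VALUES):
            best = max(best, value)
            return
        # optimistic bound: even taking every remaining item cannot beat the incumbent
        if value + sum(VALUES[i:]) <= best:
            return
        dfs(i + 1, weight + WEIGHTS[i], value + VALUES[i])   # take item i
        dfs(i + 1, weight, value)                            # skip item i
    w = sum(w for w, c in zip(WEIGHTS, fixed_choices) if c)
    v = sum(v for v, c in zip(VALUES, fixed_choices) if c)
    dfs(SPLIT_DEPTH, w, v)
    return best

if __name__ == "__main__":
    subtrees = list(product([0, 1], repeat=SPLIT_DEPTH))     # 2**SPLIT_DEPTH subtree roots
    with ProcessPoolExecutor() as pool:
        print("best value:", max(pool.map(search_subtree, subtrees)))
```

Because each worker keeps its own incumbent bound instead of sharing the global best, the parallel run may expand nodes a serial run would have pruned, which is one source of the speed-up anomalies mentioned above.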
5.
Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantifies the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given k graphs as input, alignment algorithms use topological information to assign a similarity score to each k-tuple of nodes, with elements (nodes) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
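The following sketch illustrates the classical (non-quantum) view described above under toy assumptions: it forms the Kronecker product of two small adjacency matrices and extracts its principal eigenvector by power iteration, so that high-scoring entries correspond to topologically similar node pairs. The graphs are hypothetical, and this is not the paper's quantum algorithm.

```python
# Node-pair similarity scores from the Kronecker product of two adjacency matrices.
import numpy as np

A = np.array([[0, 1, 1],              # toy adjacency matrix of graph 1 (a triangle)
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
B = np.array([[0, 1, 1, 0],           # toy adjacency matrix of graph 2 (triangle + pendant)
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

K = np.kron(A, B)                     # row/column index i * 4 + j corresponds to node pair (i, j)

x = np.ones(K.shape[0]) / K.shape[0]  # power iteration for the principal eigenvector
for _ in range(200):
    x = K @ x
    x /= np.linalg.norm(x)

scores = x.reshape(A.shape[0], B.shape[0])   # similarity score for each node pair
print(np.round(scores, 3))
```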
6.
We present a general quantum circuit design for finding eigenvalues of non-unitary matrices on quantum computers using the iterative phase estimation algorithm. In addition, we show how the method can be used for the simulation of resonance states for quantum systems.
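To make the iterative phase estimation step concrete, the sketch below is a classical simulation under simplifying assumptions: a diagonal unitary with a known eigenvector and an eigenphase that has an exact six-bit binary expansion, so each measured bit is deterministic. The paper's extension to non-unitary matrices is not reproduced here; the matrix and phase are hypothetical.

```python
# Classical simulation of iterative phase estimation for a known eigenvector of a unitary.
import numpy as np

m = 6                                   # number of phase bits to extract
phi_true = 0.390625                     # = 0.011001 in binary (exactly 6 bits, hypothetical)
U = np.diag([np.exp(2j * np.pi * phi_true), 1.0])   # toy unitary; eigenvector is |0>
psi = np.array([1.0, 0.0], dtype=complex)           # known eigenvector

bits = [0] * (m + 1)                    # bits[k] holds the k-th binary digit of phi
for k in range(m, 0, -1):               # least significant bit first
    # phase correction accumulated from the already-measured, less significant bits
    omega = sum(bits[j] / 2 ** (j - k + 1) for j in range(k + 1, m + 1))
    # controlled-U^(2^(k-1)) on |+>|psi> puts a relative phase 2*pi*2^(k-1)*phi on the ancilla
    amp = np.vdot(psi, np.linalg.matrix_power(U, 2 ** (k - 1)) @ psi)
    # after the correction and a Hadamard, P(0) = cos^2(pi * (2^(k-1)*phi - omega))
    p0 = abs(1 + amp * np.exp(-2j * np.pi * omega)) ** 2 / 4
    bits[k] = 0 if p0 > 0.5 else 1      # deterministic here because phi has exactly m bits

phi_est = sum(bits[k] / 2 ** k for k in range(1, m + 1))
print(f"phi_true = {phi_true}, phi_est = {phi_est}")   # the two should match
```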
7.
We present an efficient randomized algorithm for leader election in large-scale distributed systems. The proposed algorithm is optimal in message complexity (O(n) for n processes), has round complexity logarithmic in the number of processes, and provides high probabilistic guarantees on the election of a unique leader. The algorithm relies on a balls-and-bins abstraction and works in two phases. The main novelty of the work is in the first phase, where the number of contending processes is reduced in a controlled manner. Probabilistic quorums are used to determine a winner in the second phase. We discuss, in detail, the synchronous version of the algorithm, provide extensions to an asynchronous version, and examine the impact of failures.
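The sketch below is a toy simulation of the two-phase idea, not the paper's protocol: phase one thins the field by having each process nominate itself with probability c·log(n)/n, and phase two, which the paper resolves via probabilistic quorums, is replaced here by simply taking the largest random ticket among the survivors. The constant c and the ticket mechanism are illustrative assumptions.

```python
# Toy two-phase randomized leader election (illustrative only).
import math
import random

def elect_leader(n, c=4, seed=None):
    rng = random.Random(seed)
    # Phase 1: each process nominates itself with small probability, thinning the field.
    p = min(1.0, c * math.log(n) / n)
    contenders = [pid for pid in range(n) if rng.random() < p]
    if not contenders:                   # rare corner case; fall back to everyone contending
        contenders = list(range(n))
    # Phase 2: contenders draw random tickets; the largest ticket wins.
    tickets = {pid: rng.random() for pid in contenders}
    return max(tickets, key=tickets.get), len(contenders)

leader, survivors = elect_leader(10_000, seed=1)
print(f"leader {leader} elected from {survivors} phase-1 contenders")
```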
8.
To resolve the image deconvolution problem, the total variation (TV) minimization approach has proved to be very efficient. However, we observe that this approach has an over-minimizing TV effect, in the sense that it gives a solution whose TV is usually smaller than that of the original image. This effect is due to the preponderant role of the TV term in the corresponding minimization problem, and it prevents the exact solution of the deconvolution problem from being found when such a solution exists. We propose a modified version of the gradient descent algorithm, which leads to an exact solution of the deconvolution problem if one exists and to a satisfactory approximate solution if there is no exact one. The idea is to introduce a control on the contribution of the TV term in the classical gradient descent algorithm. The new algorithm has the advantage that the restored image has a TV closer to that of the original image than with the classical gradient descent approach. Numerical results show that our method is competitive with some recent ones.
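For context, the sketch below implements the classical gradient descent baseline the paper modifies, not the proposed controlled variant: it deblurs a 1-D piecewise-constant signal by descending on a least-squares data term plus an ε-smoothed total variation, and prints the TV of the original and restored signals so the two can be compared. The signal, kernel, and parameters are illustrative assumptions.

```python
# Classical gradient descent for TV-regularized deconvolution (1-D, smoothed TV).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[140:170] = 0.5   # piecewise-constant signal
h = np.ones(9) / 9.0                                                # box blur kernel
y = np.convolve(x_true, h, mode="same") + 0.01 * rng.standard_normal(n)

lam, eps, step = 0.01, 1e-3, 0.5
x = y.copy()
for _ in range(500):
    residual = np.convolve(x, h, mode="same") - y
    grad_fit = np.convolve(residual, h[::-1], mode="same")          # adjoint of the blur (approx.)
    dx = np.diff(x)
    w = dx / np.sqrt(dx ** 2 + eps)                                 # derivative of smoothed TV w.r.t. dx
    grad_tv = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
    x -= step * (grad_fit + lam * grad_tv)

print(f"TV(original) = {np.abs(np.diff(x_true)).sum():.2f}, "
      f"TV(restored) = {np.abs(np.diff(x)).sum():.2f}")             # compare the two TV values
```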
9.
Ramakrishnan, N.; Grama, A. Y. Computer, 1999, 32(8): 34-37.
The idea of unsupervised learning from basic facts (axioms) or from data has fascinated researchers for decades. Knowledge discovery engines try to extract general inferences from facts or training data. Statistical methods take a more structured approach, attempting to quantify data by known and intuitively understood models. The problem of gleaning knowledge from existing data sources poses a significant paradigm shift from these traditional approaches. The size, noise, diversity, dimensionality, and distributed nature of typical data sets make even formal problem specification difficult. Moreover, you typically do not have control over data generation. This lack of control opens up a Pandora's box filled with issues such as overfitting, limited coverage, and missing or incorrect data of high dimensionality. Once the problem is specified, solution techniques must deal with complexity, scalability (to meaningful data sizes), and presentation. This entire process is where data mining makes its transition from serendipity to science.
10.
The past few years have seen tremendous advances in distributed storage infrastructure. Unstructured and structured overlay networks have been successfully used in a variety of applications, ranging from file-sharing to scientific data repositories. While unstructured networks benefit from low maintenance overhead, the associated search costs are high. On the other hand, structured networks have higher maintenance overheads, but facilitate bounded time search of installed keywords. When dealing with typical data sets, though, it is infeasible to install every possible search term as a keyword into the structured overlay.  相似文献   
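As a hedged illustration of the structured side of this contrast (toy code, not the paper's system), the sketch below hashes keywords onto a consistent-hashing ring of node identifiers: each keyword is owned by its clockwise successor, so a lookup is resolved deterministically rather than by flooding. Real structured overlays such as Chord route to that owner in O(log n) hops; the node names and keywords here are hypothetical.

```python
# Keyword-to-node mapping in a toy structured overlay (consistent hashing ring).
import bisect
import hashlib

def ring_hash(value: str, bits: int = 32) -> int:
    """Map a string onto the identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (1 << bits)

class StructuredOverlay:
    """Toy consistent-hashing ring; lookups go to the clockwise successor of the key."""
    def __init__(self, node_names):
        self.ring = sorted((ring_hash(name), name) for name in node_names)
        self.ids = [node_id for node_id, _ in self.ring]

    def responsible_node(self, keyword: str) -> str:
        idx = bisect.bisect_right(self.ids, ring_hash(keyword)) % len(self.ring)
        return self.ring[idx][1]

overlay = StructuredOverlay([f"node{i}" for i in range(8)])
for kw in ("genome.dat", "climate-model", "mp3s"):
    print(kw, "->", overlay.responsible_node(kw))
```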