1.
Wireless Personal Communications - Swarm robotics is a relatively new area of research and development and a part of the swarm intelligence field. In the proposed paper, we shall use swarm...
2.
Inference of message sequence charts
Software designers draw message sequence charts (MSCs) for early modeling of the individual behaviors they expect from the concurrent system under design. Can they be sure that precisely the behaviors they have described are realizable by some implementation of the components of the concurrent system? If so, can we automatically synthesize concurrent state machines realizing the given MSCs? If, on the other hand, other unspecified and possibly unwanted scenarios are "implied" by their MSCs, can the software designer be automatically warned and provided with the implied MSCs? In this paper, we provide a framework in which all these questions are answered positively. We first describe the formal framework within which one can derive implied MSCs and then provide polynomial-time algorithms for implication, realizability, and synthesis.
3.
Verma, Amit; Dawar, Siddharth; Kumar, Raman; Navathe, Shamkant; Goyal, Vikram. Applied Intelligence, 2021, 51(7): 4649-4663

High-utility Itemset Mining (HUIM) finds patterns in a transaction database whose utility is no less than a user-defined threshold. The utility of an itemset is defined as the sum of the utilities of its items. The utility notion enables a data analyst to associate a profit score with each item and thereby with each pattern. We extend the notion of high utility with diversity to define a new pattern type called High-utility and Diverse pattern (HUD). The diversity of a pattern captures the extent of the different categories covered by the items selected in the pattern. An application of diverse patterns lies in the recommendation task, where a system can recommend to a customer a set of items from a new class based on her previously bought items. Our notion of diversity is easy to compute and also captures the basic essence of a previously proposed diversity notion. The existing algorithm to compute frequent-diverse patterns is two-phase: in the first phase, frequent patterns are computed, from which diverse patterns are then selected in the second phase. In this paper, we give an integrated algorithm that efficiently computes high-utility and diverse patterns in a single phase. Our experimental study shows that the proposed algorithm is very efficient compared to a two-phase algorithm that extracts high-utility itemsets in the first phase and selects the diverse itemsets in the second phase.
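To make the two ingredients concrete, the following is a minimal sketch of itemset utility (the sum of per-item profits over the transactions containing the itemset) and a simple category-coverage diversity measure, both checked against user-defined thresholds. The toy data, the category_of map, and the brute-force candidate loop are illustrative assumptions; this is not the paper's data structure or its single-phase mining algorithm.

    # Illustrative sketch only: utility and diversity of an itemset over a toy
    # transaction database; NOT the paper's single-phase HUD mining algorithm.

    # Each transaction maps item -> purchased quantity; each item has a unit
    # profit and a category (all values assumed for illustration).
    transactions = [
        {"bread": 2, "milk": 1, "beer": 4},
        {"bread": 1, "beer": 2},
        {"milk": 3, "cheese": 1},
    ]
    unit_profit = {"bread": 1, "milk": 2, "beer": 3, "cheese": 5}
    category_of = {"bread": "bakery", "milk": "dairy", "cheese": "dairy", "beer": "drinks"}

    def utility_in(itemset, txn):
        """Utility of an itemset in one transaction: sum of its items' utilities."""
        if not itemset.issubset(txn):
            return 0
        return sum(txn[i] * unit_profit[i] for i in itemset)

    def utility(itemset):
        """Utility of an itemset in the whole database."""
        return sum(utility_in(itemset, t) for t in transactions)

    def diversity(itemset):
        """Number of distinct categories covered by the itemset's items."""
        return len({category_of[i] for i in itemset})

    # Keep candidate itemsets whose utility and diversity clear (assumed) thresholds.
    min_util, min_div = 10, 2
    candidates = [{"bread", "beer"}, {"milk", "cheese"}, {"bread"}]
    hud_patterns = [s for s in candidates
                    if utility(s) >= min_util and diversity(s) >= min_div]
    print(hud_patterns)   # only {"bread", "beer"} clears both thresholds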
4.
Microsystem Technologies - Single-Walled Carbon Nanotubes (SWCNTs) are widely used as potential carriers in drug delivery systems. The objective of this work was to observe the effects of pristine,...
5.
In this paper, a simple idea based on the midpoint integration rule is utilized to solve a particular class of mechanics problems, namely static problems defined on unbounded domains where the solution is required to be accurate only in an interior region (and not in the far field). By developing a finite element mesh that approximates the stiffness of an unbounded domain directly (without first approximating the far-field displacement profile), the current formulation provides a superior alternative to infinite elements (IEs), which have long been used to incorporate unbounded domains into the finite element method (FEM). In contrast to most IEs, the present formulation (a) requires no new shape functions or special integration rules, (b) is proved to be both accurate and efficient, and (c) is versatile enough to handle a large variety of domains, including those with anisotropic, stratified media and convex polygonal corners. In addition, the proposed model leads to the derivation of a simple error expression that provides an explicit correlation between the mesh parameters and the accuracy achieved. This error expression can be used to calculate the accuracy of a given mesh a priori. This, in turn, allows one to generate the most efficient mesh capable of achieving a desired accuracy by solving a mesh optimization problem. We formulate such an optimization problem, solve it, and use the results to develop a practical mesh generation methodology. This methodology does not require any additional computation on the part of the user and can hence be used in practical situations to quickly generate an efficient and near-optimal finite element mesh that models an unbounded domain to the required accuracy. Numerical examples involving practical problems are presented at the end to illustrate the effectiveness of this method.
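The abstract does not reproduce the element derivation or the error expression, so the snippet below only illustrates the underlying quadrature idea in its simplest form: the composite midpoint rule applied to an integral over an unbounded interval after a change of variables. It is a generic numerical-integration sketch with assumed functions and parameters, not the paper's stiffness formulation or mesh optimization.

    import math

    # Generic illustration of the composite midpoint rule on an unbounded
    # interval: map [a, infinity) to (0, 1] with x = a / t, dx = (a / t**2) dt,
    # then apply midpoints on the finite interval. Only a sketch of the
    # quadrature idea, not the paper's unbounded-domain formulation.

    def midpoint_improper(f, a, n):
        """Approximate the integral of f over [a, inf) with n midpoint cells."""
        h = 1.0 / n
        total = 0.0
        for k in range(n):
            t = (k + 0.5) * h          # midpoint of the k-th cell in the mapped variable
            x = a / t
            total += f(x) * (a / t**2) * h
        return total

    # Example: the integral of 1 / (1 + x^2) from 1 to infinity equals pi / 4.
    approx = midpoint_improper(lambda x: 1.0 / (1.0 + x * x), a=1.0, n=400)
    print(approx, abs(approx - math.pi / 4.0))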
6.
It is a well-known result in the vision literature that the motion of independently moving objects viewed by an affine camera lies on affine subspaces of dimension four or less. As a result, a large number of recently proposed motion segmentation algorithms model the problem as one of clustering the trajectory data to its corresponding affine subspace. While these algorithms are elegant in formulation and achieve near-perfect results on benchmark datasets, they fail to address certain key real-world challenges, including perspective effects and motion degeneracies. Within a robotics and autonomous vehicle setting, the relative configuration of the robot and moving object will frequently be degenerate, leading to a failure of subspace clustering algorithms. On the other hand, while gestalt-inspired motion similarity algorithms have been used for motion segmentation, in the moving-camera case they tend to over-segment or under-segment the scene based on their parameter values. In this paper we present a principled approach that incorporates the strengths of both approaches into a cohesive motion segmentation algorithm capable of dealing with the degenerate cases, where camera motion follows that of the moving object. We first generate a set of prospective motion models for the various moving and stationary objects in the video sequence by a RANSAC-like procedure. Then, we incorporate affine and long-term gestalt-inspired motion similarity constraints into a multi-label Markov Random Field (MRF). Its inference leads to an over-segmentation, where each label belongs to a particular moving object or the background. This is followed by a model selection step where we merge clusters based on a novel motion coherence constraint we call in-frame shear, which tracks the in-frame change in orientation and distance between the clusters, leading to the final segmentation. This over-segmentation is deliberate and necessary, allowing us to assess the relative motion between the motion models, which we believe to be essential in dealing with degenerate motion scenarios. We present results on the Hopkins-155 benchmark motion segmentation dataset [27], as well as several on-road scenes where camera and object motion are near identical. We show that our algorithm is competitive with the state-of-the-art algorithms on [27] and exceeds them substantially on the more realistic on-road sequences.
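As a rough illustration of the hypothesis-generation step described above (proposing candidate motion models with a RANSAC-like procedure and scoring points by residual), the sketch below fits 2D affine motion models to point correspondences from minimal samples and keeps the model with the most inliers. The synthetic data, thresholds, and sample sizes are assumptions; the paper's full pipeline (long-term trajectories, MRF inference, in-frame shear based merging) is not reproduced here.

    import numpy as np

    # Rough sketch of RANSAC-style generation of a candidate affine motion model
    # from 2D point correspondences (x, y) -> (x', y'). Data and thresholds are
    # illustrative; this is not the paper's segmentation pipeline.

    rng = np.random.default_rng(0)

    def fit_affine(src, dst):
        """Least-squares 2D affine transform: dst ~= [src, 1] @ params."""
        X = np.hstack([src, np.ones((len(src), 1))])          # (n, 3)
        params, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (3, 2)
        return params

    def residuals(params, src, dst):
        pred = np.hstack([src, np.ones((len(src), 1))]) @ params
        return np.linalg.norm(pred - dst, axis=1)

    def ransac_affine(src, dst, n_iters=200, thresh=1.0):
        best_inliers, best_params = None, None
        for _ in range(n_iters):
            idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
            params = fit_affine(src[idx], dst[idx])
            inl = residuals(params, src, dst) < thresh
            if best_inliers is None or inl.sum() > best_inliers.sum():
                best_inliers, best_params = inl, params
        return best_params, best_inliers

    # Synthetic example: most points follow one affine motion, a few do not.
    src = rng.uniform(0, 100, size=(50, 2))
    A = np.array([[1.0, 0.05], [-0.05, 1.0]])
    dst = src @ A.T + np.array([3.0, -2.0])
    dst[:10] += rng.uniform(-30, 30, size=(10, 2))             # a second "object"
    params, inliers = ransac_affine(src, dst)
    print("inliers:", int(inliers.sum()), "of", len(src))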
7.
Multimedia Tools and Applications - Medical image watermarking is a challenging area of research. High bandwidth, secure transmission of patient's data among hospitals and hiding capacity are...
8.
Constructing plans that can handle multiple problem instances is a longstanding open problem in AI. We present a framework for generalized planning that captures the notion of algorithm-like plans and unifies various approaches developed for addressing this problem. Using this framework, and building on the TVLA system for static analysis of programs, we develop a novel approach for computing generalizations of classical plans by identifying sequences of actions that will make measurable progress when placed in a loop. For a wide class of problems that we characterize formally in the paper, these methods allow us to find generalized plans with loops that solve problem instances of unbounded size, and also to determine the correctness and applicability of the computed generalized plans. We demonstrate the scope and scalability of the proposed approach on a wide range of planning problems.
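To make the idea of an algorithm-like plan with a loop concrete, here is a toy sketch (an illustration only, not the paper's TVLA-based construction): a single delivery plan whose loop body makes measurable progress by reducing the number of undelivered packages, so the same plan solves instances of any size.

    # Toy illustration of a generalized plan with a loop. Each iteration makes
    # measurable progress (one fewer undelivered package), so one plan handles
    # problem instances of unbounded size.

    def generalized_delivery_plan(state):
        """state: {'truck_at': location, 'undelivered': {package: destination}}"""
        plan_trace = []
        while state["undelivered"]:                       # the loop of the plan
            pkg, dest = next(iter(state["undelivered"].items()))
            plan_trace += [("load", pkg), ("drive", dest), ("unload", pkg)]
            state["truck_at"] = dest
            del state["undelivered"][pkg]                 # measurable progress
        return plan_trace

    # The same plan works for instances of different sizes.
    small = {"truck_at": "depot", "undelivered": {"p1": "A"}}
    large = {"truck_at": "depot",
             "undelivered": {f"p{i}": f"loc{i}" for i in range(1, 6)}}
    print(len(generalized_delivery_plan(small)), "actions")
    print(len(generalized_delivery_plan(large)), "actions")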
9.
For shapes represented as closed planar contours, we introduce a class of functionals which are invariant with respect to the Euclidean group and which are obtained by performing integral operations. While such integral invariants enjoy some of the desirable properties of their differential counterparts, such as locality of computation (which allows matching under occlusions) and uniqueness of representation (asymptotically), they do not exhibit the noise sensitivity associated with differential quantities and, therefore, do not require presmoothing of the input shape. Our formulation allows the analysis of shapes at multiple scales. Based on integral invariants, we define a notion of distance between shapes. The proposed distance measure can be computed efficiently and allows warping the shape boundaries onto each other; its computation results in optimal point correspondence as an intermediate step. Numerical results on shape matching demonstrate that this framework can match shapes despite the deformation of subparts, missing parts and noise. As a quantitative analysis, we report matching scores for shape retrieval from a database.
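One classical member of this family of functionals is the local area invariant: for each contour point, the area of the shape inside a disk of a chosen radius. It is invariant under rigid motions, computed locally, and naturally multi-scale (by varying the radius). The sketch below approximates it on a rasterized shape by convolving the binary mask with a disk kernel; the grid resolution, radius, and toy square shape are assumptions, and this is not claimed to be the exact functional or distance measure of the paper.

    import numpy as np
    from scipy.signal import fftconvolve

    # Sketch of a local area integral invariant on a rasterized shape:
    # for each contour point, the number of shape pixels inside a disk of
    # radius r centered at that point (an area in pixel units).

    def disk_kernel(r):
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        return (x**2 + y**2 <= r**2).astype(float)

    def local_area_invariant(mask, contour_pts, r):
        """mask: binary image of the shape; contour_pts: (n, 2) pixel (row, col) coords."""
        area = fftconvolve(mask.astype(float), disk_kernel(r), mode="same")
        rows, cols = contour_pts[:, 0], contour_pts[:, 1]
        return area[rows, cols]          # one scalar per contour point

    # Tiny example: a filled square; the invariant sampled along part of its boundary.
    mask = np.zeros((100, 100))
    mask[30:70, 30:70] = 1.0
    boundary = np.array([[30, c] for c in range(30, 70)] +
                        [[r, 69] for r in range(30, 70)])
    signature = local_area_invariant(mask, boundary, r=8)
    # Smaller near corners, roughly half the disk area along straight edges.
    print(signature.min(), signature.max())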
10.
The 2010 CAV (Computer-Aided Verification) Award was presented to Kenneth L. McMillan of Cadence Research Laboratories for a series of fundamental contributions resulting in significant advances in the scalability of model checking tools. The annual award recognizes a specific fundamental contribution or a series of outstanding contributions to the CAV field.