991.
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize the apparent contour of E from the topological viewpoint, namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994, and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the observer's eye. These two topological theorems allow us to determine the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
Maurizio Paolini
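The second theorem admits a compact schematic statement. The following LaTeX rendering is our own notation (with the observer assumed to look along the z-axis), not the paper's formulation:

\[
\exists\,\Phi:\mathbb{R}^3\to\mathbb{R}^3,\qquad
\Phi(x,y,z)=\bigl(x,\,y,\,\varphi(x,y,z)\bigr),\qquad
\Phi(E)=F,\qquad
z\mapsto\varphi(x,y,z)\ \text{strictly increasing for every }(x,y).
\]

That is, the two shapes differ only by a fiberwise, order-preserving reparametrization of the viewing direction.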
992.
Software agents' ability to interact within different open systems, designed by different groups, presupposes an agreement on an unambiguous definition of the set of concepts used to describe the context of the interaction and the communication language the agents can use. Agents' interactions ought to allow for reliable expectations about the possible evolution of the system; however, in open systems interacting agents may not conform to predefined specifications. A possible solution is to define interaction environments that include a normative component, with suitable rules to regulate the behaviour of agents. To tackle this problem we propose an application-independent metamodel of artificial institutions that can be used to define open multiagent systems. In our view an artificial institution is made up of an ontology that models the social context of the interaction, a set of authorizations to act on the institutional context, a set of linguistic conventions for the performance of institutional actions, and a system of norms needed to constrain the agents' actions.
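As a concrete, purely illustrative reading of these four ingredients, the sketch below encodes them as a plain data structure; the class and field names are ours and do not come from the paper or from any agent platform.

from dataclasses import dataclass, field

@dataclass
class ArtificialInstitution:
    # hypothetical names; the four fields mirror the four components listed above
    ontology: dict = field(default_factory=dict)         # concepts of the social context
    authorizations: list = field(default_factory=list)   # (role, institutional action) pairs
    conventions: dict = field(default_factory=dict)      # message pattern -> institutional action it counts as
    norms: list = field(default_factory=list)            # obligations/prohibitions constraining agents

auction = ArtificialInstitution(
    ontology={"lot": "item being sold", "current_price": "highest accepted bid"},
    authorizations=[("auctioneer", "open_auction"), ("participant", "bid")],
    conventions={"inform(bid, x)": "bid(x)"},
    norms=["the winning participant is obliged to pay the current_price"],
)
print(auction.authorizations)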
993.
Infonorma is a multi-agent system that provides its users with recommendations of legal normative instruments they might be interested in. The Filter agent of Infonorma classifies normative instruments represented as Semantic Web documents into legal branches and performs content-based similarity analysis. This agent, as well as the entire Infonorma system, was modeled under the guidelines of MAAEM, a software development methodology for multi-agent application engineering. This article describes the Infonorma requirements specification, the architectural design solution for those requirements, the detailed design of the Filter agent and the implementation model of Infonorma, according to the guidelines of the MAAEM methodology.
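For readers unfamiliar with content-based similarity analysis, the toy sketch below shows the general idea (cosine similarity between bag-of-words vectors). It is not Infonorma's Filter agent; the texts, profile and tokenisation are illustrative only.

from collections import Counter
import math

def bow(text: str) -> Counter:
    # naive whitespace tokenisation; real systems would normalise and weight terms
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

user_profile = bow("environmental licensing water resources protection")
instrument = bow("decree on licensing of water resources usage")
print(round(cosine(user_profile, instrument), 3))   # ~0.507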
994.
We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By “repairing the topology”, we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy some special properties which can make such images very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm to the repair of 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
James Gee
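A common way to make the well-composedness definition operational is to test for the two "critical configurations" used in the well-composedness literature. The sketch below is not the authors' randomized repair algorithm, and the characterization is stated from memory; it merely flags a binary volume that is not well-composed.

import numpy as np

def has_c1(vol: np.ndarray) -> bool:
    # C1: a 2x2 "checkerboard" square in a slice orthogonal to any axis
    for axis in range(3):
        for s in np.moveaxis(vol, axis, 0):
            p00, p01 = s[:-1, :-1], s[:-1, 1:]
            p10, p11 = s[1:, :-1], s[1:, 1:]
            if ((p00 & p11 & ~p01 & ~p10) | (~p00 & ~p11 & p01 & p10)).any():
                return True
    return False

def has_c2(vol: np.ndarray) -> bool:
    # C2: a 2x2x2 block whose only foreground (or only background) voxels
    # are two diagonally opposite corners
    X, Y, Z = vol.shape
    for x in range(X - 1):
        for y in range(Y - 1):
            for z in range(Z - 1):
                blk = vol[x:x + 2, y:y + 2, z:z + 2]
                n = int(blk.sum())
                if n == 2:
                    pts = np.argwhere(blk)
                elif n == 6:
                    pts = np.argwhere(~blk)
                else:
                    continue
                if np.all(pts[0] != pts[1]):   # the two corners differ in every coordinate
                    return True
    return False

def is_well_composed(vol: np.ndarray) -> bool:
    vol = vol.astype(bool)
    return not has_c1(vol) and not has_c2(vol)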
995.
This paper is concerned with the derivation of infinite schedules for timed automata that are in some sense optimal. To cover a wide class of optimality criteria we start out by introducing an extension of the (priced) timed automata model that includes both costs and rewards as separate modelling features. A precise definition is then given of what constitutes optimal infinite behaviours for this class of models. We subsequently show that the derivation of optimal non-terminating schedules for such double-priced timed automata is computable. This is done by reducing the problem to the determination of optimal mean cycles in finite graphs with weighted edges. This reduction is obtained by introducing the so-called corner-point abstraction, a powerful abstraction technique which we show preserves optimal schedules. Most of this work was done while visiting CISS at Aalborg University in Denmark; it was supported by CISS and by ACI Cortos, a program of the French Ministry of Research.
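The finite-graph problem that the reduction targets, minimum mean-weight cycles, is classical. The sketch below is Karp's algorithm in its textbook form; it is our illustration, not the paper's construction, and it says nothing about how the corner-point abstraction produces the graph. It assumes every vertex is reachable from the chosen source.

import math

def min_mean_cycle(n, edges, source=0):
    # edges: iterable of (u, v, w) with vertices 0..n-1
    INF = math.inf
    # D[k][v] = minimum weight of a walk with exactly k edges from source to v
    D = [[INF] * n for _ in range(n + 1)]
    D[0][source] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = None   # None means no cycle is reachable
    for v in range(n):
        if D[n][v] == INF:
            continue
        worst = max((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] < INF)
        best = worst if best is None else min(best, worst)
    return best

# Ratio-style criteria (cost per reward) would need the edge weights to already
# combine cost and reward, e.g. w = cost - lambda * reward for a fixed lambda.
print(min_mean_cycle(3, [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 0.0), (1, 0, 5.0)]))   # 1.0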
996.
This paper presents an automated and compositional procedure to solve the substitutability problem in the context of evolving software systems. Our solution contributes two techniques for checking correctness of software upgrades: (1) a technique based on the simultaneous use of over- and under-approximations obtained via existential and universal abstractions; (2) a dynamic assume-guarantee reasoning algorithm, in which previously generated component assumptions are reused and altered on the fly to prove or disprove the global safety properties of the updated system. When upgrades are found to be non-substitutable, our solution generates constructive feedback showing developers how to improve the components. The substitutability approach has been implemented and validated in the ComFoRT reasoning framework, and we report encouraging results on an industrial benchmark. This is an extended version of the paper Dynamic Component Substitutability Analysis, published in the Proceedings of the Formal Methods 2005 Conference, Lecture Notes in Computer Science, vol. 3582, by the same authors. This research was sponsored by the National Science Foundation under grant nos. CNS-0411152, CCF-0429120, CCR-0121547, and CCR-0098072, the Semiconductor Research Corporation under grant no. TJ-1366, the US Army Research Office under grant no. DAAD19-01-1-0485, the Office of Naval Research under grant no. N00014-01-1-0796, the ICAST project, and the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Carnegie Mellon University. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the US government or any other entity.
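For context, the non-circular rule that assume-guarantee frameworks of this kind typically build on can be written as follows; this is a standard textbook formulation, not a claim about the exact rule used in the paper. The dynamic aspect described above lies in reusing and repairing the learned assumption A when a component is upgraded.

\[
\frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}{\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]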
997.
Measuring the structural similarity between an XML document and a DTD has many relevant applications, ranging from document classification and approximate structural queries on XML documents to selective dissemination of XML documents and document protection. The problem is harder than measuring structural similarity among documents, because a DTD can be considered a generator of documents: the problem is thus to evaluate the similarity between a document and a set of documents. An effective structural similarity measure must satisfy several requirements: it should account for the presence and absence of required elements, for the structure and level of the missing and extra elements, and for vocabulary discrepancies due to the use of synonymous or syntactically similar tags. In the paper, starting from these requirements, we provide a definition of the measure and present an algorithm for matching a document against a DTD to obtain their structural similarity. Finally, experimental results to assess the effectiveness of the approach are presented.
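As a purely illustrative example of level-sensitive matching (a toy measure, not the one defined in the paper), the sketch below scores a document against a DTD simplified to a mapping element -> (required children, optional children), discounting discrepancies that occur deeper in the tree.

import xml.etree.ElementTree as ET

def node_score(elem, dtd, depth=0):
    required, optional = dtd.get(elem.tag, (set(), set()))
    child_tags = [c.tag for c in elem]
    missing = required - set(child_tags)
    extra = [t for t in child_tags if t not in required | optional]
    weight = 1.0 / (depth + 1)            # deeper mismatches matter less
    slots = len(required) + len(child_tags)
    score = weight * (slots - len(missing) - len(extra))
    total = weight * slots
    for child in elem:
        s, t = node_score(child, dtd, depth + 1)
        score, total = score + s, total + t
    return score, total

def similarity(xml_text, dtd):
    score, total = node_score(ET.fromstring(xml_text), dtd)
    return score / total if total else 1.0

dtd = {"book": ({"title", "author"}, {"year"}), "author": ({"name"}, set())}
doc = "<book><title>T</title><author><name>N</name></author><price>9</price></book>"
print(round(similarity(doc, dtd), 2))   # the extra <price> element lowers the score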
998.
In our previous work, we introduced a computational architecture that effectively supports continuous monitoring and aggregation querying of complex, domain-meaningful, time-oriented concepts and patterns (temporal abstractions) in environments featuring large volumes of continuously arriving and accumulating time-oriented raw data. Examples include provision of decision support in clinical medicine, making financial decisions, detecting anomalies and potential threats in communication networks, integrating intelligence information from multiple sources, etc. In this paper, we describe the general, domain-independent but task-specific problem-solving method underlying our computational architecture, which we refer to as incremental knowledge-based temporal abstraction (IKBTA). The IKBTA method incrementally computes temporal abstractions by maintaining the persistence and validity of continuously computed temporal abstractions as time-stamped data arrive. We focus on the computational framework underlying our reasoning method, provide well-defined semantic and knowledge requirements for incremental inference, which utilizes a logical model of time, data, and high-level abstract concepts, and provide a detailed analysis of the computational complexity of our approach.
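To make the incremental flavour concrete, here is a minimal sketch of incremental state abstraction over a stream of time-stamped values; it is our illustration, not the IKBTA method, and the thresholds, state names and single-parameter setting are assumptions. Each new sample either extends the currently open abstract interval, preserving its persistence, or closes it and opens a new one.

def classify(value):
    # illustrative thresholds for a single numeric parameter
    if value < 4.0:
        return "LOW"
    if value <= 10.0:
        return "NORMAL"
    return "HIGH"

class StateAbstractor:
    def __init__(self):
        self.closed = []      # finished intervals: (start, end, state)
        self.current = None   # open interval still subject to extension

    def add_sample(self, t, value):
        state = classify(value)
        if self.current and self.current[2] == state:
            self.current = (self.current[0], t, state)   # extend the open interval
        else:
            if self.current:
                self.closed.append(self.current)         # its validity is now settled
            self.current = (t, t, state)
        return self.closed + [self.current]

abstractor = StateAbstractor()
for t, v in [(0, 5.1), (1, 5.4), (2, 11.2), (3, 12.0), (4, 6.0)]:
    intervals = abstractor.add_sample(t, v)
print(intervals)   # [(0, 1, 'NORMAL'), (2, 3, 'HIGH'), (4, 4, 'NORMAL')]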
999.
In this paper, consistency is understood in the standard way, i.e. as the absence of a contradiction. The basic constructive logic BKc4, which is adequate to this sense of consistency in the ternary relational semantics without a set of designated points, is defined. Then, it is shown how to define a series of logics by extending BKc4 up to minimal intuitionistic logic. All logics defined in this paper are paraconsistent logics.
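For readers unfamiliar with the terminology, the two notions at stake can be stated as follows; this is a textbook-style rendering, not specific to BKc4 or to the ternary relational semantics.

\[
\text{consistency of } \Gamma:\quad \text{there is no } A \text{ with } \Gamma \vdash A \text{ and } \Gamma \vdash \neg A;
\qquad
\text{paraconsistency:}\quad A, \neg A \nvdash B \ \text{for some } A, B.
\]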
1000.
In this paper we explore differences in the use of the so-called ‘logical’ elements of language, such as quantifiers and conditionals, and use these differences to explain differences in performance on reasoning tasks across subject groups with different educational backgrounds. It is argued that quantified sentences are difficult natural bases for reasoning, and hence more prone to elicit variation in reasoning behaviour, because in everyday speech they are chiefly used with a pre-determined domain. By contrast, it is argued that conditional sentences form natural premises because of the function they serve in everyday speech. Implications of this for the role of logic in modelling human reasoning behaviour are briefly considered.