141.
Sustainable supply chain management (SSCM) addresses economic, social, and environmental requirements in the material and service flows between suppliers, manufacturers, and customers. An SSCM structure is considered a prerequisite for sustainable success, so designing an effective SSCM structure provides companies with competitive advantages. To achieve an effective design of this structure, quality function deployment (QFD), a well-established product and system development tool, can be applied. This study presents a decision framework in which analytic network process (ANP)-integrated QFD and zero-one goal programming (ZOGP) models are used to determine the design requirements that are most effective in achieving a sustainable supply chain (SSC). The first phase of QFD is the house of quality (HOQ), which transforms customer requirements into product design requirements. In this study, after determining the sustainability requirements of an SSC, termed customer requirements (CRs) and design requirements (DRs), ANP is employed to determine the importance levels in the HOQ, taking into account the interrelationships among the DRs and CRs. Furthermore, a ZOGP approach is used to accommodate the different objectives of the problem. The proposed method is applied in a case study and the results obtained are discussed.
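The core HOQ computation the abstract describes can be illustrated with a small sketch: CR weights (which the paper derives via ANP) are propagated through a relationship matrix to rank the DRs. All numbers below are illustrative, not from the paper.

```python
# Hypothetical sketch: deriving design-requirement (DR) importance levels
# from customer-requirement (CR) weights through a house-of-quality (HOQ)
# relationship matrix. Weights and matrix entries are invented examples.

cr_weights = [0.5, 0.3, 0.2]  # e.g. CR priorities obtained from an ANP model

# relationship[i][j]: strength with which DR j satisfies CR i (0/1/3/9 scale)
relationship = [
    [9, 3, 0, 1],
    [1, 9, 3, 0],
    [0, 1, 9, 3],
]

n_drs = len(relationship[0])
raw = [sum(cr_weights[i] * relationship[i][j] for i in range(len(cr_weights)))
       for j in range(n_drs)]
total = sum(raw)
dr_importance = [r / total for r in raw]  # normalized DR importance levels
print(dr_importance)
```

In the paper's framework these importance levels would then feed a ZOGP model that selects DRs under resource constraints; this sketch stops at the HOQ step.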
142.
This study investigates the cognitive abilities involved in hypertext learning and the design approaches that can help users. We examined the effects of two types of high-level content organizers, a graphic spatial map and an alphabetical list, on readers' memory for hypertext structure. In the control condition, a simple "home" page with no navigational aid was offered. Subjects were asked to read the hypertext with the purpose of learning the content, but in the post-test phase they also had to recall the layout of nodes and links. Memory for links and page locations varied as a function of condition. When a spatial map was available, participants reconstructed the formal structure more accurately than in the other two conditions. Participants' memory for page locations was least accurate in the list condition. The results also indicate that participants use the content organizer, when it is available, to orient themselves while learning from hypertext documents. Our results show that a content organizer depicting the formal structure can facilitate the spatial mapping process; however, an organizer exposing a structure different from the real one would generate a conflict.
143.
Complex systems are often designed and built from smaller pieces, called components. Components are open sub-systems meant to be combined (or composed) to form other components or closed systems. It is well known that Petri nets allow such component-based modeling, relying on parallel composition and transition synchronization. However, synchronizing transitions that carry temporal constraints does not yield a compositional method for assembling components, a highly desirable property. This paper addresses that problem: how to build complex systems in a compositional manner from components specified by Time Petri nets (TPNs). A first solution is proposed, adequate for a particular subclass of Time Petri nets but significantly increasing the complexity of components. An improved solution is then developed, relying on an extension of Time Petri nets with two relations added on transitions. This latter solution requires a much simpler transformation of nets, does not significantly increase their complexity, and is applicable to a larger class of TPNs.
144.
In this paper, we present new monolithic and compositional algorithms to solve the LTL realizability problem. These algorithms are based on a reduction of the LTL realizability problem to a game whose winning condition is defined by a universal automaton on infinite words with a k-co-Büchi acceptance condition. This acceptance condition requires that runs visit at most k accepting states, so it implicitly defines a safety game. To obtain efficient algorithms from this construction, we need several additional ingredients. First, we study the structure of the underlying automata constructions and show that there exists a partial order that structures the state space of the underlying safety game. This partial order can be used to define an efficient antichain algorithm. Second, we show that the algorithm can be implemented incrementally by considering increasing values of k in the acceptance condition. Finally, we show that for large LTL formulas written as conjunctions of smaller formulas, the problem can be solved compositionally by first computing winning strategies for each conjunct appearing in the large formula. We report on the behavior of these algorithms on several benchmarks, showing that the compositional algorithms are able to handle LTL formulas that are several pages long.
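The antichain idea mentioned above can be sketched in a few lines: game positions are represented as counting functions ordered pointwise, and only the maximal elements need to be stored because dominated positions carry no extra information. This is an illustrative toy, not the paper's implementation; counting functions are shown here simply as tuples of visit counts.

```python
# Illustrative sketch of an antichain data structure: keep only the
# pointwise-maximal counting functions, discarding dominated ones.
# Assumption: maximal elements subsume the rest, as in antichain-based
# safety-game solving.

def dominates(g, f):
    """True if g >= f pointwise (g subsumes f)."""
    return all(g[i] >= f[i] for i in range(len(f)))

def antichain_insert(antichain, f):
    """Insert f, keeping the set an antichain of maximal elements."""
    if any(dominates(g, f) for g in antichain):
        return antichain                          # f is subsumed: nothing to do
    # drop every element that f dominates, then add f
    return [g for g in antichain if not dominates(f, g)] + [f]

chain = []
for f in [(1, 0), (0, 1), (1, 1), (0, 0)]:
    chain = antichain_insert(chain, f)
print(chain)  # -> [(1, 1)]: the only maximal element
```

The payoff is that fixpoint computations over the game iterate only over the antichain, which is typically far smaller than the full state space.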
145.
This paper addresses the problem of automatic semantic indexing of news videos by presenting a video annotation and retrieval system that performs automatic semantic annotation of news video archives and provides access to the archives via these annotations. The presented system relies on video texts as the information source and exploits several information extraction techniques on these texts to arrive at representative semantic information about the underlying videos. These techniques include named entity recognition, person entity extraction, coreference resolution, and semantic event extraction. Apart from the information extraction components, the proposed system also encompasses modules for news story segmentation, text extraction, and video retrieval, along with a news video database, making it a full-fledged system suitable for practical settings. The proposed system is generic, employing a wide range of techniques to automate the semantic video indexing process and to bridge the semantic gap between what can be automatically extracted from videos and what people perceive as the video semantics. Based on the proposed system, a novel automatic semantic annotation and retrieval system is built for Turkish and evaluated on a broadcast news video collection, providing evidence of its feasibility and convenience for news videos, with satisfactory overall performance.
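To make the extraction steps concrete, here is a deliberately tiny, rule-based illustration of person entity extraction plus a trivial coreference resolution over a line of video text. Real systems like the one described use trained models; the cue words, names, and regexes below are invented for illustration only.

```python
import re

# Toy illustration (not the paper's pipeline): extract person entities
# introduced by a title cue, then resolve bare pronouns to the most
# recently mentioned person.

PERSON_CUE = re.compile(r"\b(?:President|Minister|Dr\.)\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)")
PRONOUN = re.compile(r"\b(?:He|She|he|she)\b")

def annotate(text):
    persons = PERSON_CUE.findall(text)
    last = persons[-1] if persons else None
    # naive coreference: substitute the last-seen person for pronouns
    resolved = PRONOUN.sub(last, text) if last else text
    return persons, resolved

text = "President John Smith visited the plant. He praised the workers."
persons, resolved = annotate(text)
print(persons)
print(resolved)
```

Even this toy shows why the paper chains the components: entity extraction must run before coreference resolution, because the pronoun substitution depends on the entity list.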
146.
In this study, we evaluate the potential of Raman spectroscopy (RS) for assessing renal tumours during surgery. Different classes of renal Raman spectra acquired during a clinical protocol are discriminated using support vector machine classifiers. The influence on the classification scores of various preprocessing steps generally involved in RS is also investigated and evaluated in the particular context of renal tumour characterization. Encouraging results demonstrate the value of RS for evaluating kidney cancer and suggest the potential of this technique as a surgical aid during partial nephrectomy.
147.
This work presents methods for deforming meshes in a shape-sensitive way using Moving Least Squares (MLS) optimization. It extends an approach for deforming space (Cuno et al., in Proceedings of the 27th Computer Graphics International Conference, pp. 115–122, 2007) by showing how custom distance metrics may be used to achieve deformations that preserve the overall mesh shape. Several variant formulations are discussed and demonstrated, including the use of geodesic distances, distances constrained to paths contained in the mesh, the use of skeletons, and a reformulation of the MLS scheme that makes it possible to affect the bending behavior of the deformation. Finally, aspects of the implementation of these techniques on parallel architectures such as GPUs (graphics processing units) are described and compared with CPU-only implementations.
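The simplest special case of MLS deformation, restricted to per-point translations, already shows where a custom distance metric plugs in: each point is displaced by a weighted average of control-point displacements, and swapping the Euclidean distance for a geodesic or path-constrained one is exactly the kind of change the work above explores. This is a hedged sketch of that special case; the function names, the `alpha` falloff, and the 2D setting are illustrative assumptions, not the paper's formulation.

```python
import math

# Translation-only MLS deformation sketch: point v moves by the
# weighted average of the handle displacements q_i - p_i, with weights
# falling off with distance.  Replacing `euclidean` with a geodesic
# distance is what makes the deformation shape-sensitive.

def euclidean(a, b):
    return math.dist(a, b)

def mls_translate(v, handles, targets, dist=euclidean, alpha=2.0, eps=1e-9):
    """Deform 2D point v given control points `handles` moved to `targets`."""
    weights = [1.0 / (dist(p, v) ** (2 * alpha) + eps) for p in handles]
    wsum = sum(weights)
    dx = sum(w * (q[0] - p[0]) for w, p, q in zip(weights, handles, targets)) / wsum
    dy = sum(w * (q[1] - p[1]) for w, p, q in zip(weights, handles, targets)) / wsum
    return (v[0] + dx, v[1] + dy)

handles = [(0.0, 0.0), (1.0, 0.0)]
targets = [(0.0, 1.0), (1.0, 0.0)]  # lift the left handle upward
# a point sitting on a handle follows that handle almost exactly
print(mls_translate((0.0, 0.0), handles, targets))
```

The full method solves for affine or rigid transforms per point rather than pure translations, which is what gives the richer bending behavior discussed in the paper.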
148.
Digital Elevation Models (DEMs) are used to compute the hydro-geomorphological variables required by distributed hydrological models. However, the resolution of the most precise DEMs is too fine to run these models over regional watersheds, so DEMs need to be aggregated to coarser resolutions, affecting both the representation of the land surface and the hydrological simulations. In the present paper, six algorithms (mean, median, mode, nearest neighbour, maximum, and minimum) are used to aggregate the Shuttle Radar Topography Mission (SRTM) DEM from 3″ (90 m) to 5′ (10 km) in order to simulate the water balance of the Lake Chad basin (2.5 million km²). Each of these methods is assessed with respect to selected hydro-geomorphological properties that influence Terrestrial Hydrology Model with Biogeochemistry (THMB) simulations, namely the drainage network, the Lake Chad bottom topography, and the floodplain extent. The results show that the mean and median methods produce a smoother representation of the topography. This smoothing removes the depressions governing the floodplain dynamics (floodplain area < 5000 km²) but also eliminates the spikes and wells responsible for deviations in the drainage network. By contrast, the other aggregation methods yield a rougher relief representation that enables the simulation of a larger floodplain area (>14,000 km² with the maximum or nearest-neighbour methods) but produces anomalies in the drainage network. An aggregation procedure based on a variographic analysis of the SRTM data is therefore suggested: preliminary filtering of the 3″ DEM to smooth spikes and wells, followed by resampling to 5′ via the nearest-neighbour method so as to preserve the representation of depressions. With the resulting DEM, the drainage network, the Lake Chad bathymetric curves, and the simulated floodplain hydrology are consistent with observations (3% underestimation of simulated evaporation volumes).
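Block aggregation of a DEM grid, the operation compared above, is easy to sketch. The elevation values and the tiny 4×4 grid below are invented (the SRTM 3″ → 5′ case aggregates on the order of 100×100 fine cells per coarse cell), and the "nearest" entry is a simplified stand-in that keeps one value from near the block centre rather than doing true nearest-neighbour resampling.

```python
from statistics import mean, median

# Aggregate a 2-D grid by non-overlapping block x block windows,
# summarizing each window with the chosen statistic.

def aggregate(dem, block, method):
    stats = {
        "mean": mean,
        "median": median,
        "min": min,
        "max": max,
        # simplified stand-in for nearest-neighbour resampling:
        # keep one representative value from near the block centre
        "nearest": lambda vals: vals[len(vals) // 2],
    }
    f = stats[method]
    rows, cols = len(dem), len(dem[0])
    out = []
    for r in range(0, rows, block):
        out.append([f([dem[r + i][c + j]
                       for i in range(block) for j in range(block)])
                    for c in range(0, cols, block)])
    return out

dem = [[100, 102, 300, 304],
       [101, 150, 302, 306],
       [ 90,  91, 200, 210],
       [ 92,  93, 205, 400]]
print(aggregate(dem, 2, "mean"))  # smooth: averages out the 150 spike
print(aggregate(dem, 2, "max"))   # rough: keeps spikes, loses depressions
```

The contrast the paper measures is visible even here: the mean flattens the spike at 150, while the maximum preserves it but discards the low cells that would form depressions.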
149.
150.