  Paid full text: 857 articles
  Free: 68 articles
Electrical engineering: 5 articles
General: 1 article
Chemical industry: 166 articles
Metalworking: 20 articles
Machinery and instrumentation: 18 articles
Building science: 27 articles
Mining engineering: 3 articles
Energy and power: 15 articles
Light industry: 236 articles
Water resources engineering: 5 articles
Petroleum and natural gas: 1 article
Radio and electronics: 46 articles
General industrial technology: 165 articles
Metallurgy: 88 articles
Nuclear technology: 2 articles
Automation: 127 articles
  2023: 6 articles
  2022: 9 articles
  2021: 24 articles
  2020: 16 articles
  2019: 20 articles
  2018: 41 articles
  2017: 34 articles
  2016: 44 articles
  2015: 32 articles
  2014: 43 articles
  2013: 78 articles
  2012: 62 articles
  2011: 43 articles
  2010: 44 articles
  2009: 40 articles
  2008: 32 articles
  2007: 28 articles
  2006: 20 articles
  2005: 14 articles
  2004: 11 articles
  2003: 6 articles
  2002: 7 articles
  2001: 7 articles
  1999: 10 articles
  1998: 21 articles
  1997: 13 articles
  1996: 9 articles
  1995: 7 articles
  1994: 5 articles
  1993: 6 articles
  1989: 5 articles
  1982: 10 articles
  1981: 4 articles
  1980: 6 articles
  1979: 4 articles
  1977: 12 articles
  1976: 5 articles
  1973: 15 articles
  1972: 4 articles
  1970: 7 articles
  1969: 4 articles
  1968: 5 articles
  1967: 5 articles
  1954: 4 articles
  1932: 6 articles
  1931: 9 articles
  1930: 8 articles
  1928: 7 articles
  1927: 8 articles
  1926: 8 articles
A total of 925 query results (search time: 15 ms)
841.
842.
Image population analysis is a class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require substantial computational power and memory, demands that are compounded by the large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems to address this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems in terms of both memory usage and computational efficiency. ISP uses the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the out-of-core processing pipeline. Consequently, for computationally intensive problems, the ISP out-of-core solution can achieve the same performance as an in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
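The abstract gives no implementation details, but the central idea it describes (hiding out-of-core I/O latency by overlapping disk reads with computation) can be illustrated with a small, hypothetical sketch. The `load_brick` and `process_brick` placeholders below are illustrative stand-ins, not part of ISP.

```python
# Minimal sketch of latency hiding for out-of-core multi-image processing:
# a producer thread streams bricks (sub-volumes) from disk while the main
# thread processes the previous brick, so I/O and compute overlap.
import queue
import threading
import numpy as np

def load_brick(path, index):
    """Stand-in for reading one brick (sub-volume) of a volume file from disk."""
    # A real implementation would memory-map or read a slice of `path` here.
    return np.random.rand(64, 64, 64).astype(np.float32)

def process_brick(brick):
    """Placeholder per-brick statistic, e.g. a running mean for population analysis."""
    return brick.mean()

def stream_process(path, n_bricks, prefetch=4):
    q = queue.Queue(maxsize=prefetch)          # bounded buffer between I/O and compute

    def producer():
        for i in range(n_bricks):
            q.put(load_brick(path, i))         # blocks when the buffer is full
        q.put(None)                            # sentinel: no more bricks

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while (brick := q.get()) is not None:      # compute overlaps with the next read
        results.append(process_brick(brick))
    return results

print(len(stream_process("volume.raw", n_bricks=8)))   # "volume.raw" is a placeholder name
```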
843.
This paper presents a complete system for categorizing handwritten documents, i.e., classifying documents according to their topic. The categorization approach is based on detecting discriminative keywords, which are then used in the well-known tf-idf representation for document categorization. Two keyword extraction strategies are explored. The first recognizes the whole document; however, its performance degrades sharply as the lexicon size increases. The second extracts only the discriminative keywords from the handwritten documents. This information extraction strategy relies on integrating a rejection model (or anti-lexicon model) into the recognition system. Experiments were carried out on an unconstrained handwritten document database from an industrial application concerned with processing incoming mail. Results show that the discriminative keyword extraction system yields better recall/precision trade-offs than the full recognition strategy, and it also outperforms the full recognition strategy on the categorization task.
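As a rough illustration of the tf-idf categorization step mentioned above, the following sketch assumes the keyword-spotting stage has already reduced each handwritten document to a list of detected keywords. The sample data and the use of scikit-learn are illustrative choices, not the authors' implementation.

```python
# Sketch: categorize documents from spotted keywords with tf-idf + a linear classifier.
# The keyword lists and topic labels below are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training "document" is the sequence of keywords spotted in a handwritten letter.
train_docs = ["invoice payment overdue", "contract termination notice", "invoice refund claim"]
train_topics = ["billing", "legal", "billing"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_topics)

print(model.predict(["refund overdue payment"]))   # expected: ['billing']
```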
844.
Modeling and transforming have always been the cornerstones of software system development, albeit often investigated by different research communities. Modeling addresses how information is represented and processed, while transformation is concerned with the results of processing that information. To address the growing complexity of software systems, model-driven engineering (MDE) leverages domain-specific languages to define abstract models of systems and automated methods to process them. Meanwhile, compiler technology mostly concentrates on advanced techniques and tools for program transformation, and has developed complex analyses and transformations (from lexical and syntactic to semantic analyses, down to platform-specific optimizations). Today these two communities appear quite complementary, and they are starting to meet again in the field of software language engineering (SLE), which addresses all stages of a software language lifecycle, from its definition to its tooling. In this article, we show how SLE can lean on the expertise of both the MDE and compiler research communities, and how each community can bring its solutions to the other. We then draw a picture of the current state of SLE and the challenges it still faces.
845.
Memory colors refer to the colors of specific image regions whose essential attribute is that human observers perceive them in a consistent manner. In color correction, or rendering, tasks, this consistency implies that they must be faithfully reproduced; in that respect, these regions matter more than other regions in an image. There are various schemes and attributes for detecting memory colors, but the preferred method remains segmenting the images into meaningful regions, a task for which many algorithms exist. Memory-color regions are not, however, similar in their attributes: significant variations in shape, size, and texture exist. As such, it is unclear whether a single segmentation algorithm is best suited to all of these classes. Using a large database of real-world images, class-specific geometrical features, called eigenregions, were calculated. They can be used to evaluate how well an algorithm is adapted to segmenting a given class. A measure of the localization of memory colors is also given. The performance of class-specific eigenregions was compared with that of general ones in the task of memory-color-region classification, and they were observed to provide a noticeable improvement in classification rates.
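The abstract does not spell out how eigenregions are computed; one plausible reading, by analogy with eigenfaces, is a principal component analysis over binary masks of segmented regions. The sketch below illustrates that reading with made-up masks and may differ from the paper's actual feature definition.

```python
# Sketch of "eigenregions" as PCA over binary masks of segmented regions,
# analogous to eigenfaces. The masks are random stand-ins for a real database
# of memory-color regions (sky, skin, vegetation, ...).
import numpy as np

def eigenregions(masks, n_components=8):
    """masks: array of shape (n_regions, H, W) with values in {0, 1}."""
    X = masks.reshape(len(masks), -1).astype(np.float64)
    X -= X.mean(axis=0)                          # center each pixel position
    # Principal components of the region-shape distribution:
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_components].reshape(n_components, *masks.shape[1:])

rng = np.random.default_rng(0)
demo_masks = (rng.random((50, 32, 32)) > 0.5).astype(np.uint8)
components = eigenregions(demo_masks)
print(components.shape)                          # (8, 32, 32)
```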
846.
Streaming simplification of tetrahedral meshes (total citations: 1; self-citations: 0; citations by others: 1)
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can achieve the real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is then performed sequentially on small in-core portions of the mesh. The output is a coherent streaming mesh, which facilitates further processing. Our technique is fast, produces high-quality approximations, and operates out-of-core to process meshes too large for main memory.
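For readers unfamiliar with quadric-based simplification, the sketch below shows only the core cost computation for a single collapse in the Garland-Heckbert style; the streaming layout, tetrahedral volume terms, and out-of-core traversal described in the abstract are omitted, and the toy planes are illustrative.

```python
# Sketch of the quadric error metric behind quadric-based simplification,
# reduced to the cost of collapsing two vertices into one optimal position.
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p for the plane ax + by + cz + d = 0."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def collapse_cost(Q1, Q2):
    """Cost and optimal position for collapsing two vertices with quadrics Q1, Q2."""
    Q = Q1 + Q2
    A = Q.copy()
    A[3] = [0.0, 0.0, 0.0, 1.0]                 # solve for the minimizing position
    try:
        v = np.linalg.solve(A, [0.0, 0.0, 0.0, 1.0])
    except np.linalg.LinAlgError:
        v = np.array([0.0, 0.0, 0.0, 1.0])      # fallback (midpoint in practice)
    return float(v @ Q @ v), v[:3]

# Toy example: vertex a constrained by planes x = 1 and z = 0, vertex b by y = 2;
# the optimal collapse position is their intersection (1, 2, 0) with cost ~ 0.
Q_a = plane_quadric(1, 0, 0, -1) + plane_quadric(0, 0, 1, 0)
Q_b = plane_quadric(0, 1, 0, -2)
cost, position = collapse_cost(Q_a, Q_b)
print(cost, position)
```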
847.
Despite advances in the quality and availability of information and communication technologies (ICT), the level of access to and skill in using these resources remains unequal, particularly in developing countries. To reduce this gap in ICT use, organizations (both public and private) invest in expanding infrastructure by providing ICT access and ICT training. In this study, we show that there is a gap in current approaches to monitoring and evaluating large-scale ICT training interventions. We therefore propose an approach based on social network analysis and data mining techniques, and apply it to an online training program conducted in different regions of Brazil. The results allow us to examine different aspects of the program, such as the participants' regions, the institutions driving the intervention, local indicators of telecommunications infrastructure, and local socioeconomic conditions.
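The abstract does not describe how the network is built; one plausible, purely illustrative reading is a graph linking participants to the institutions driving the intervention, on which standard centrality measures are computed. The sketch below uses made-up data and assumes the networkx library.

```python
# Illustrative sketch only: a participant-institution graph with degree centrality.
import networkx as nx

edges = [  # (participant, institution) pairs -- made-up data
    ("p1", "inst_A"), ("p2", "inst_A"), ("p3", "inst_B"),
    ("p4", "inst_A"), ("p4", "inst_B"),
]
G = nx.Graph(edges)
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])   # most connected nodes
```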
848.
There is a worldwide demand for decentralized wastewater treatment options. An on-site engineered ecosystem (EE) treatment plant was designed with a multistage approach for small wastewater generators in tropical areas. The array of treatment units included a septic tank, a submersed aerated filter, and a secondary decanter, followed by three vegetated tanks containing aquatic macrophytes intercalated with one tank of algae. During 11 months of operation at a flow rate of 52 L/h, the system removed on average 93.2% of the chemical oxygen demand (COD) and 92.9% of the volatile suspended solids (VSS), reaching final concentrations of 36.3 ± 12.7 and 13.7 ± 4.2 mg/L, respectively. For ammonia-N (NH4-N) and total phosphorus (TP), the system removed on average 69.8% and 54.5%, with final concentrations of 18.8 ± 9.3 and 14.0 ± 2.5 mg/L, respectively. The tanks with algae and macrophytes together contributed to the overall nutrient removal, with 33.6% for NH4-N and 26.4% for TP. The final concentrations of all parameters except TP met the discharge threshold limits established by Brazilian and EU legislation. The EE was considered appropriate for the purpose for which it was created.
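Assuming removal is defined as (C_in - C_out) / C_in, the influent concentrations implied by the reported removal rates and effluent values can be back-calculated. The short check below uses that assumption; the influent figures are inferred, not reported in the abstract.

```python
# Back-of-the-envelope check of the reported figures, assuming
# removal (%) = (C_in - C_out) / C_in * 100.
def influent(c_out, removal_pct):
    return c_out / (1.0 - removal_pct / 100.0)

print(round(influent(36.3, 93.2), 1))   # COD:   ~533.8 mg/L implied influent
print(round(influent(18.8, 69.8), 1))   # NH4-N: ~62.3 mg/L implied influent
print(round(influent(14.0, 54.5), 1))   # TP:    ~30.8 mg/L implied influent
```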
849.
In recent years, spatial data infrastructures (SDIs) have gained great popularity as a way to facilitate interoperable access to geospatial data offered by different agencies. To enhance data retrieval, current infrastructures usually offer a catalog service. Nevertheless, such catalog services still have important limitations that make it difficult for users to find the geospatial data they are interested in. Current catalog drawbacks include the use of a single record to describe all the feature types offered by a service, the lack of formal means to describe the semantics of the underlying data, and the lack of an effective ranking metric to organize the results retrieved from a query. Aiming to overcome these limitations, this article proposes SESDI (Semantically-Enabled Spatial Data Infrastructures), a framework that reuses techniques from classic information retrieval to improve geographic data retrieval in an SDI. Moreover, the framework proposes several ranking metrics to handle spatial, semantic, temporal, and multidimensional queries.
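The abstract does not define its ranking metrics; as a purely hypothetical illustration of how a spatial and a semantic component might be combined into one relevance score, the sketch below uses a bounding-box overlap ratio and a Jaccard term similarity with arbitrary weights. These are illustrative choices, not the SESDI metrics themselves.

```python
# Hypothetical combined relevance score mixing a spatial and a semantic component.
def bbox_overlap(q, r):
    """Fraction of the query box (minx, miny, maxx, maxy) covered by record box r."""
    w = max(0.0, min(q[2], r[2]) - max(q[0], r[0]))
    h = max(0.0, min(q[3], r[3]) - max(q[1], r[1]))
    q_area = (q[2] - q[0]) * (q[3] - q[1])
    return (w * h) / q_area if q_area > 0 else 0.0

def term_similarity(q_terms, r_terms):
    q, r = set(q_terms), set(r_terms)
    return len(q & r) / len(q | r) if q | r else 0.0

def score(query_box, query_terms, record, w_spatial=0.5, w_semantic=0.5):
    return (w_spatial * bbox_overlap(query_box, record["bbox"])
            + w_semantic * term_similarity(query_terms, record["keywords"]))

record = {"bbox": (0, 0, 10, 10), "keywords": ["hydrography", "river"]}
print(score((2, 2, 8, 8), ["river", "flood"], record))   # 0.5 * 1.0 + 0.5 * (1/3)
```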
850.
Maximizing the singularity-free workspace of parallel manipulators is highly desirable in the context of robot design. So far, no work has addressed the maximal singularity-free orientation workspace over a position region. In practice, this type of workspace is of interest because a mechanism often works within a range of positions. This work focuses on the Gough–Stewart platform. An optimal position at which the mechanism attains the maximal singularity-free orientation workspace is determined; it lies on the line perpendicular to the base and passing through the centroid of the base. Considering the symmetry, a parallelepiped centred at this optimal position is an interesting working position region for the Gough–Stewart platform. Two algorithms are presented to compute the maximal singularity-free orientation workspace over such a position region, and an example is provided for demonstration.
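The paper's algorithms are not reproduced here; as a rough numerical illustration of what a singularity-free orientation check involves, the sketch below samples a grid of orientations at one fixed position and tests whether the determinant of the standard Gough-Stewart inverse-kinematics Jacobian ever changes sign. The octahedral geometry and the +/-30 degree range are made-up assumptions, and brute-force sampling is not equivalent to computing the maximal workspace over a position region.

```python
# Brute-force singularity check over an orientation grid for a Gough-Stewart platform,
# using det(J) of the inverse-kinematics Jacobian with rows [s_i^T, (R b_i x s_i)^T].
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def jacobian_det(a, b, p, R):
    """det(J) for base points a, platform points b (platform frame), pose (p, R)."""
    rows = []
    for ai, bi in zip(a, b):
        bi_w = R @ bi                       # platform point expressed in the base frame
        leg = p + bi_w - ai
        s = leg / np.linalg.norm(leg)       # unit leg direction
        rows.append(np.hstack([s, np.cross(bi_w, s)]))
    return np.linalg.det(np.array(rows))

# Illustrative octahedral (3-3) geometry: base radius 1, platform radius 0.5.
B = [np.array([np.cos(t), np.sin(t), 0.0]) for t in np.deg2rad([90, 210, 330])]
P = [0.5 * np.array([np.cos(t), np.sin(t), 0.0]) for t in np.deg2rad([30, 150, 270])]
a = np.array([B[0], B[2], B[0], B[1], B[1], B[2]])   # each base vertex carries two legs
b = np.array([P[0], P[0], P[1], P[1], P[2], P[2]])   # to its two adjacent platform vertices
p = np.array([0.0, 0.0, 1.0])                        # fixed position on the central axis

ref = np.sign(jacobian_det(a, b, p, np.eye(3)))
grid = np.deg2rad(np.linspace(-30, 30, 13))
free = all(np.sign(jacobian_det(a, b, p, rot_zyx(y, t, r))) == ref
           for y in grid for t in grid for r in grid)
print("sampled +/-30 deg orientation box is singularity-free:", free)
```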