Full-text access type
Paid full text | 3223 articles |
Free | 696 articles |
Free (domestic) | 405 articles |
Subject classification
Electrical engineering | 137 articles |
General | 272 articles |
Chemical industry | 74 articles |
Metalworking | 18 articles |
Machinery & instrumentation | 104 articles |
Building science | 230 articles |
Mining engineering | 43 articles |
Energy & power engineering | 3 articles |
Light industry | 23 articles |
Hydraulic engineering | 12 articles |
Petroleum & natural gas | 7 articles |
Weapons industry | 59 articles |
Radio electronics | 572 articles |
General industrial technology | 169 articles |
Metallurgy | 53 articles |
Atomic energy technology | 6 articles |
Automation technology | 2542 articles |
Publication year
2024 | 14 articles |
2023 | 76 articles |
2022 | 122 articles |
2021 | 143 articles |
2020 | 130 articles |
2019 | 101 articles |
2018 | 102 articles |
2017 | 98 articles |
2016 | 123 articles |
2015 | 160 articles |
2014 | 242 articles |
2013 | 198 articles |
2012 | 255 articles |
2011 | 280 articles |
2010 | 237 articles |
2009 | 233 articles |
2008 | 277 articles |
2007 | 285 articles |
2006 | 179 articles |
2005 | 192 articles |
2004 | 162 articles |
2003 | 113 articles |
2002 | 116 articles |
2001 | 106 articles |
2000 | 69 articles |
1999 | 63 articles |
1998 | 61 articles |
1997 | 45 articles |
1996 | 37 articles |
1995 | 25 articles |
1994 | 18 articles |
1993 | 17 articles |
1992 | 9 articles |
1991 | 6 articles |
1990 | 4 articles |
1989 | 1 article |
1988 | 7 articles |
1987 | 3 articles |
1986 | 2 articles |
1985 | 2 articles |
1984 | 2 articles |
1983 | 2 articles |
1982 | 3 articles |
1981 | 1 article |
1980 | 1 article |
1979 | 2 articles |
Sort order: 4324 results found; search took 359 ms
1.
Displays, 2015
Multi-projector displays enable large, immersive projection environments by tiling the output of multiple projectors. Such tiled displays require real-time geometric warping of the content projected by each projector. This warping is computationally intensive and is typically performed on high-end graphics processing units (GPUs) that can process only a fixed number of projector channels, which limits the applicability of such multi-projector display systems to content generated on desktop machines. In this paper we propose a platform-independent, FPGA-based, scalable hardware architecture for geometric correction of projected content that allows each additional projector channel to be added at a fractional increase in logic area. The proposed scheme corrects HD-quality video streams in real time and thus enables the use of this technology in embedded and standalone devices.
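At its core, the geometric correction described above is a per-pixel projective (homography) warp. A minimal software sketch of that operation, assuming a 3×3 homography matrix H (the matrices below are hypothetical examples; the paper's hardware pipeline would implement the same arithmetic in fixed-point logic):

```python
import numpy as np

def warp_point(H, x, y):
    """Apply a 3x3 homography to a pixel coordinate, with projective divide."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Identity homography: leaves coordinates unchanged.
H_id = np.eye(3)

# Hypothetical example: scale x by 2, then translate by (10, 5).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
```

In an FPGA implementation, each projector channel would evaluate this mapping per output pixel, which is why adding a channel costs only a fixed slice of logic area.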
2.
Greenish-yellow organic light-emitting diodes (GYOLEDs) have steadily attracted researchers' attention because of their practical importance, yet their performance lags significantly behind that of OLEDs based on the three primary colors. Herein, for the first time, an ideal host-guest system is demonstrated that achieves high-performance phosphorescent GYOLEDs with a guest concentration as low as 2%. The GYOLED exhibits a forward-viewing power efficiency of 57.0 lm/W at 1000 cd/m2, the highest reported among GYOLEDs, along with extremely low efficiency roll-off and drive voltages. The origin of the high performance is traced to the combined mechanisms of host-guest energy transfer and direct exciton formation on the guest, which together furnish the greenish-yellow emission. Building on this host-guest system, a simplified yet high-performance hybrid white OLED (WOLED) is developed that exhibits an ultrahigh color rendering index (CRI) of 92, a maximum total efficiency of 27.5 lm/W and a low turn-on voltage of 2.5 V (at 1 cd/m2), opening an avenue to simultaneously achieving a simplified structure, ultrahigh CRI (>90), high efficiency and low voltage.
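As a unit check on figures like the 57.0 lm/W above: for an emitter assumed to be Lambertian, forward luminous flux relates to luminance as Φ = π·L·A, and power efficiency is flux over electrical input power. A small illustrative helper (the 57.0 lm/W in the abstract is a measured value; the Lambertian assumption and the device area here are ours):

```python
import math

def forward_power_efficiency(luminance_cd_m2, area_m2, electrical_power_w):
    """Forward-viewing power efficiency in lm/W, assuming a Lambertian
    emitter so that luminous flux = pi * L * A."""
    flux_lm = math.pi * luminance_cd_m2 * area_m2
    return flux_lm / electrical_power_w
```

For example, a hypothetical 4 mm2 pixel held at 1000 cd/m2 emits about π·1000·4e-6 ≈ 0.0126 lm forward, so reaching 57 lm/W requires driving it at roughly 0.22 mW.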
3.
This study addresses the problem of choosing the most suitable probabilistic model selection criterion for unsupervised learning of the visual context of a dynamic scene using mixture models. A rectified Bayesian Information Criterion (BICr) and a Completed Likelihood Akaike's Information Criterion (CL-AIC) are formulated to estimate the optimal model order (complexity) for a given visual scene. Both criteria are designed to overcome the poor model selection exhibited by existing popular criteria when the data sample size varies from small to large and the true mixture distribution kernel functions differ from the assumed ones. Extensive experiments on learning visual context for dynamic scene modelling demonstrate the effectiveness of BICr and CL-AIC compared with existing popular model selection criteria, including BIC, AIC and the Integrated Completed Likelihood (ICL). Our study suggests that for learning visual context using a mixture model, BICr is the most appropriate criterion given sparse data, while CL-AIC should be chosen given moderate or large data sample sizes.
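The paper's BICr and CL-AIC are rectified variants of the standard criteria; the rectification terms themselves are not reproduced here, but for reference the baseline BIC and AIC that they modify are:

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion: -2*logL + k*ln(n).
    Penalizes model complexity more heavily as the sample size grows."""
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

def aic(log_likelihood, n_params):
    """Akaike's Information Criterion: -2*logL + 2k.
    Fixed complexity penalty, independent of sample size."""
    return -2.0 * log_likelihood + 2.0 * n_params
```

For mixture-model order selection, one fits mixtures of increasing order and picks the order minimizing the criterion; the paper's point is that which criterion to minimize should depend on the sample size.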
4.
A Novel Blind Watermark Detection Algorithm for Uncompressed Digital Video  Cited by: 10 (self-citations: 0; citations by others: 10)
A digital watermark is a tool embedded in multimedia data for copyright identification. Among the many host media, digital video has drawn growing attention for its large hiding capacity, good transparency and strong robustness. However, many video watermarking schemes in the literature simply extract individual frames from the stream and process them, which makes these algorithms essentially identical to still-image watermarking methods: they fail to exploit the specific properties of video files and are highly sensitive to common moving-image attacks such as frame averaging and video compression. To address these problems, this paper takes uncompressed video files as the experimental subject and, by combining a human visual system model with color-image scene segmentation, proposes and implements a blind watermark detection algorithm based on the temporal axis of the video. Experimental results show that the algorithm is effective and practical.
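The abstract's key idea is detecting along the time axis rather than within single frames. This is not the paper's algorithm (which additionally uses an HVS model and scene segmentation), but a toy sketch of temporal-axis blind detection by correlating per-frame mean luminance against a keyed ±1 sequence:

```python
import numpy as np

# Balanced +/-1 watermark key along the time axis (hypothetical, 64 frames).
key = np.array([1.0, -1.0] * 32)
np.random.default_rng(7).shuffle(key)

def embed(frame_means, strength=0.5):
    """Shift each frame's mean luminance by +/-strength according to the key."""
    return frame_means + strength * key

def detect(frame_means):
    """Blind detection: correlate the mean-removed temporal signal with the
    key; no access to the original video is needed."""
    centered = frame_means - frame_means.mean()
    return float(centered @ key) / len(key)

clean = np.full(64, 128.0)    # constant-luminance stand-in for real frames
marked = embed(clean)
```

A per-frame approach would see essentially nothing here, since each frame is only shifted by a fraction of a gray level; the temporal correlation recovers the mark.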
5.
DGPSL: A Distributed Graphics Library  Cited by: 1 (self-citations: 0; citations by others: 1)
Shi Jiaoying; Pan Zhigeng; Zheng Wenting
6.
Science in China Series E (English edition), 2005, (Z2)
All imaging systems cause a blurring of the scene radiance field during image acquisition. The accurate characterization of this blurring is referred to as the modulation transfer function (MTF)[1]. The MTF is a fundamental imaging system design specification and system quality metric often used in remote sensing. It results from the cumulative effects of the instrumental optics (diffraction, aberrations, focusing error), integration on a photosensitive surface, charge diffusion along the arra…
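Concretely, the MTF is the normalized magnitude of the Fourier transform of the system's point spread function (PSF). A minimal 1-D sketch, with a hypothetical Gaussian blur standing in for the cumulative instrumental PSF:

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the Fourier transform of the PSF."""
    otf = np.fft.fft(psf)          # optical transfer function (complex)
    mtf = np.abs(otf)
    return mtf / mtf[0]            # normalize so that MTF(0) = 1

x = np.arange(-32, 32)
psf = np.exp(-x**2 / (2 * 4.0**2))   # hypothetical Gaussian blur, sigma = 4 px
mtf = mtf_from_psf(psf)
```

Each blur source in the chain (optics, detector integration, charge diffusion) contributes its own MTF, and the system MTF is their product at each spatial frequency.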
7.
Inverse surface design problems from light transport behavior specification usually represent extremely complex and costly processes, but their importance is well known. In particular, they are very interesting for lighting and luminaire design, in which it is usually difficult to test design decisions on a physical model in order to avoid costly mistakes. In this survey, we present the main ideas behind these kinds of problems, characterize them, and summarize existing work in the area, revealing problems that remain open and possible areas of further research.
8.
9.
Roy S. Berns, Color Research and Application, 2007, 32(4): 334-335
The term "color gamut" historically has been associated with color output, such as optimal color stimuli and additive and subtractive imaging systems. Recently, this term has also been used for input devices such as scanners and digital cameras. It is proposed that the term "color-gamut rendering" be used instead for input devices. This clarifies the distinction between input (analysis) and output (synthesis) color systems in terms of the effect of an input system on defining the colorimetric properties of an output system. © 2007 Wiley Periodicals, Inc. Col Res Appl, 32, 334-335, 2007
10.
Kevin E. Spaulding, Geoffrey J. Woolfe, Rajan L. Joshi, Color Research and Application, 2003, 28(4): 251-266
Image sources, such as digital camera captures and photographic negatives, typically have more information than can be reproduced on a photographic print or a video display. The information that is lost during the tone/color rendering process relates to both the extended dynamic range and color gamut of the original scene. In conventional photographic systems, most of this additional information is archived on the photographic negative and can be accessed by adjusting the way the negative is printed. However, most digital imaging systems have traditionally archived only a rendered video RGB image. As a result, it is not possible to make the same sorts of image manipulations that historically have been possible with conventional photographic systems. This suggests that there would be an advantage to storing images using an extended dynamic range/color gamut color encoding. However, because of file compatibility issues, digital imaging systems that store images using color encoding other than a standard video RGB representation (e.g., sRGB) would be significantly disadvantaged in the marketplace. In this article, we describe a solution that has been developed to maintain compatibility with existing file formats and software applications, while simultaneously retaining the extended dynamic range and color gamut information associated with the original scenes. With this approach, the input raw digital camera image or film scan is first transformed to the scene‐referred ERIMM RGB color encoding. Next, a rendered sRGB image is formed in the usual way and stored in a conventional image file (e.g., a standard JPEG file). A residual image representing the difference between the original extended dynamic range image and the final rendered image is formed and stored in the image file using proprietary metadata tags. 
This provides a mechanism for archiving the extended dynamic range/color gamut information, which is normally discarded during the rendering process, without sacrificing interoperability. Appropriately enabled applications can decode the residual image metadata and use it to reconstruct the ERIMM RGB image, whereas applications that are not aware of the metadata will ignore it and only have access to the sRGB image. The residual image is formed such that it will have negligible pixel values for those portions of the image that lie within the sRGB gamut, and will therefore be highly compressible. Tests on a population of 950 real customer images have demonstrated that the extended dynamic range scene information can be stored with an average file size overhead of about 8% compared to the sRGB images alone. © 2003 Wiley Periodicals, Inc. Col Res Appl, 28, 251-266, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.10160
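The residual-image mechanism described above can be sketched in a few lines. This is an illustration only: a naive clip to [0, 1] stands in for the actual sRGB rendering, and the ERIMM RGB transform and metadata packaging are not reproduced:

```python
import numpy as np

def make_residual(scene, rendered):
    """Residual between the extended-range scene and the rendered image.
    In-gamut pixels yield zeros, which is what makes it highly compressible."""
    return scene - rendered

def reconstruct(rendered, residual):
    """An aware application adds the residual back to recover the scene."""
    return rendered + residual

# Hypothetical scene values, two in gamut and two beyond the displayable range.
scene = np.array([0.2, 0.8, 1.7, 2.4])
rendered = np.clip(scene, 0.0, 1.0)   # naive stand-in for sRGB rendering
residual = make_residual(scene, rendered)
```

An unaware application simply uses `rendered`; an aware one recovers `scene` exactly, which mirrors the file-format compatibility argument in the abstract.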