991.
Many video service sites, led by YouTube, know which content requires copyright protection. However, they lack a copyright protection system that automatically distinguishes whether uploaded videos contain legal or illegal content. Existing protection techniques use content-based retrieval methods that compare video features, but they perform poorly when the video encoding changes in resolution, bit rate, or codec. This paper therefore proposes a novel video-matching algorithm that remains accurate even when the encoding has changed, and an intelligent copyright protection system built on it that can automatically prevent the uploading of illegal content. The proposed method achieved 97% accuracy with the search algorithm in video-matching experiments and 98.62% with the automation algorithm in copyright-protection experiments. This system could therefore form a core technology that identifies illegal content and automatically blocks access to it on many video service sites.
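The abstract does not describe the matching algorithm itself; the following is a minimal sketch of the general idea behind encoding-robust content-based matching, using coarse per-frame luminance histograms that change little under resolution, bit-rate, or codec changes. All function names and the threshold are hypothetical illustrations, not taken from the paper:

```python
def frame_signature(frame, bins=8):
    """Coarse luminance histogram, normalized so it is largely
    insensitive to resolution, bit-rate, and codec changes."""
    hist = [0] * bins
    for row in frame:
        for pixel in row:
            hist[min(pixel * bins // 256, bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    """Histogram intersection in [0, 1]; 1 means identical signatures."""
    return sum(min(a, b) for a, b in zip(sig_a, sig_b))

def videos_match(frames_a, frames_b, threshold=0.9):
    """Declare a match if the average per-frame similarity is high."""
    sims = [similarity(frame_signature(fa), frame_signature(fb))
            for fa, fb in zip(frames_a, frames_b)]
    return sum(sims) / len(sims) >= threshold
```

Because the signature is normalized, a re-encoded copy with shuffled pixel positions but the same luminance distribution still matches, while unrelated content does not.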
992.
(abstract unavailable)
993.
In this paper we propose a dynamic pricing scheme for the contributing peers in a Video on Demand (VoD) system. The scheme provides an effective mechanism to maximize profit from the residual resources of the contributing peers. A utilization function is executed for each contributing peer to estimate its utility factor from parameters such as initial setup cost, holding cost, chaining cost, and salvage cost. We present an effective dynamic pricing algorithm that efficiently exploits this range of parameters with varying degrees of complexity. The key findings are that (i) each contributing peer receives a monetary benefit proportional to its resource contributions to the VoD system, and (ii) a high degree of social optimum is established by proficiently aggregating the contributing peers' resources with the overall resources of the VoD system. We validate our claim by simulating the proposed dynamic pricing scheme against standard pricing schemes based on altruism, a cost model, and a game-theoretic perspective. Our dynamic pricing scheme yields a better utility factor than the other standard pricing schemes.
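The abstract names four cost parameters but not the utilization function itself. The sketch below shows one plausible form, in which a peer's utility factor is its contribution revenue plus salvage value, net of setup, holding, and chaining costs; the formula and all names are assumptions for illustration, not taken from the paper:

```python
def utility_factor(contributed_gb, price_per_gb,
                   setup_cost, holding_cost_per_gb,
                   chaining_cost, salvage_value):
    """Hypothetical net monetary benefit of one contributing peer."""
    revenue = contributed_gb * price_per_gb
    costs = setup_cost + contributed_gb * holding_cost_per_gb + chaining_cost
    return revenue + salvage_value - costs

def social_welfare(peers):
    """Aggregate utility over all contributing peers (a proxy for the
    social optimum the scheme tries to approach)."""
    return sum(utility_factor(**p) for p in peers)
```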
994.
Most smoothing-schedule algorithms compute the bit rate required to transmit all of the video data. In this paper, we propose a tolerable-data-dropping algorithm that instead adjusts the transmitted data to fit the available bit rate. MPEG-4 with fine grained scalability (FGS) supports partial data dropping to adapt to the available network bandwidth. Our algorithm builds on the minimum variance bandwidth allocation (MVBA) algorithm proposed by Salehi et al., computing a bit rate for MPEG-4 FGS streams that still ensures the buffer never underflows or overflows under limited bandwidth. We prove that the proposed algorithm, named MVBADP, is smoother than MVBA. The experimental results report the peak rate, the number of rate changes, the ratio of dropped data, and the PSNR for four test sequences with different content characteristics, varied over buffer sizes and tolerable dropping ratios. We found that MVBADP can reduce the peak rate and the number of rate changes when transmitted data are dropped within the tolerable dropping ratio, especially for video sequences with high motion, complex texture, and large size changes between consecutive frames.
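MVBA computes a piecewise-constant schedule that minimizes rate variability between buffer underflow and overflow bounds. As a much simpler illustration of why a tolerable dropping ratio lowers the required peak rate, the sketch below computes the minimum feasible constant rate that avoids buffer underflow when a fraction of each frame's data may be dropped (as MPEG-4 FGS allows). This is an illustrative simplification, not the MVBADP algorithm:

```python
def min_constant_rate(frame_sizes, drop_ratio=0.0):
    """Smallest constant transmission rate (units per frame slot) that
    never lets the playback buffer underflow, when up to `drop_ratio`
    of each frame's data may be dropped."""
    cumulative = 0.0
    peak = 0.0
    for t, size in enumerate(frame_sizes, start=1):
        cumulative += size * (1.0 - drop_ratio)
        # The rate must cover all (kept) data consumed by time t.
        peak = max(peak, cumulative / t)
    return peak
```

Dropping data scales down the consumption curve, so every candidate rate shrinks and so does the peak.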
995.
This paper presents a 2D-to-3D conversion scheme that generates a 3D human model from a single depth image together with several color images. In building a complete 3D model, no prior knowledge such as a pre-computed scene structure or photometric and geometric calibration is required, since the depth camera directly acquires calibrated geometric and color information in real time. The proposed method deals with the self-occlusion problem that often occurs in images captured by a monocular camera: when an image is obtained from a fixed view, it may lack data for part of an object due to occlusion. The method consists of the following steps. First, noise in the depth image is reduced using a series of image processing techniques. Second, a 3D mesh surface is constructed using the proposed depth-image-based modeling method. Third, the occlusion problem is resolved by removing the unwanted triangles in the occluded region and filling the corresponding hole. Finally, textures are extracted and mapped onto the 3D surface of the model to provide a photo-realistic appearance. Comparison with related work demonstrates the efficiency of our method in terms of visual quality and computation time. It can be used to create 3D human models in many 3D applications.
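As a sketch of the first step, depth-noise reduction, a small median filter is a common choice for depth maps because it removes speckle while preserving depth edges. The implementation below is a generic illustration of that idea, not the paper's specific pipeline:

```python
def median_filter(depth, size=3):
    """size x size median filter over a 2D depth map (list of lists);
    border pixels are left unchanged for simplicity."""
    h, w = len(depth), len(depth[0])
    r = size // 2
    out = [row[:] for row in depth]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [depth[y + dy][x + dx]
                      for dy in range(-r, r + 1)
                      for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]  # median of the window
    return out
```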
996.
In this article we give an overview of the design and implementation of the second generation of Kansas Lava. Driven by the needs and experiences of implementing telemetry decoders and other circuits, we have made a number of improvements to both the external API and the internal representations used. We have retained our dual shallow/deep representation of signals in general, but now provide a number of externally visible abstractions for combinatorial circuits, sequential circuits, and enabled signals. We introduce these abstractions, as well as our abstractions for reading and writing memory. Internally, we found the need to represent unknown values inside our circuits, so we make aggressive use of associated type families to lift our values to allow unknowns in a principled and regular way. We discuss this design decision, how it unfortunately complicates the internals of Kansas Lava, and how we mitigate this complexity. Finally, for connecting Kansas Lava to the real world, the standardized idiom of named input and output ports is provided through a new monad, called Fabric. We present the design of this Fabric monad and illustrate its use in a small but complete example.
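The dual shallow/deep representation mentioned above is a general EDSL technique: each signal carries both a value for simulation (shallow) and an expression tree for netlist generation (deep). Kansas Lava itself is a Haskell library; the Python sketch below is purely a language-agnostic illustration of the idea, not its API:

```python
class Signal:
    """Pairs a shallow embedding (simulated value) with a deep
    embedding (an AST usable for circuit/netlist generation)."""
    def __init__(self, value, ast):
        self.value = value  # shallow: what simulation computes
        self.ast = ast      # deep: structure for code generation

    def __and__(self, other):
        # One operation builds both representations at once.
        return Signal(self.value & other.value,
                      ("and2", self.ast, other.ast))

def bit(name, value):
    """A named input bit."""
    return Signal(value, ("input", name))
```

Simulating and inspecting the same description is then a matter of reading `.value` or `.ast` from the same object.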
997.
In this paper, we propose a set of automatic stress-exaggeration methods that enlarge the differences between stressed and unstressed syllables. These methods can be used in computer-aided language learning systems to help second-language learners perceive stress patterns. They are intended to support hyper-pronunciation training, a technique commonly used by teachers in classrooms, in which exaggeration helps learners become more aware of acoustic features and apply them effectively in their own pronunciation. Duration, pitch, and intensity are claimed to be the main acoustic features closely related to stress in English. We therefore propose four stress-exaggeration methods: (i) duration-based, (ii) pitch-based, (iii) intensity-based, and (iv) a combined method that integrates all three. Our perceptual experiments show that stimuli resynthesised by the proposed methods significantly improve the ability of learners of English as a Second Language (ESL) to perceive English stress patterns.
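To illustrate the duration-based variant, the sketch below lengthens stressed syllables and shortens unstressed ones by fixed factors before resynthesis; the scaling factors and data layout are hypothetical, not the paper's values, and a real system would feed the scaled durations to a resynthesiser such as PSOLA:

```python
def exaggerate_durations(syllables, stress_factor=1.5, unstress_factor=0.8):
    """Widen the stressed/unstressed duration contrast.

    `syllables` is a list of (text, duration_seconds, is_stressed)."""
    return [(text,
             dur * (stress_factor if stressed else unstress_factor),
             stressed)
            for text, dur, stressed in syllables]
```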
998.
The Web has evolved into a dominant digital medium for conducting many types of online transactions such as shopping, paying bills, and making travel plans. Such transactions typically involve a number of steps spanning several Web pages. For sighted users these steps are relatively straightforward with graphical Web browsers, but they pose tremendous challenges for visually impaired individuals, because screen readers, the dominant assistive technology for these users, speak out the screen's content serially. Consequently, using them to conduct transactions can cause considerable information overload. Yet one usually needs to browse only a small fragment of a Web page to complete a step of a transaction (e.g., choosing an item from a search-results list). Based on this observation, this paper presents a model-directed transaction framework that identifies, extracts, and aurally renders only the "relevant" page fragments at each step of a transaction. The framework uses a process model to encode the state of the transaction and a concept model to identify the page fragments relevant in that state. We also present algorithms that mine such models from click-stream data generated by transactions, and experimental evidence of the practical effectiveness of our models in improving the user experience of conducting online transactions through non-visual modalities.
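A minimal way to picture the framework's two models: the process model as a finite-state machine over transaction steps, and the concept model as a mapping from each state to the page fragment worth rendering aurally. The state names and fragment labels below are illustrative assumptions, not from the paper:

```python
# Process model: transaction state -> next state for each user action.
PROCESS_MODEL = {
    "search":      {"submit_query": "results"},
    "results":     {"select_item": "item_detail"},
    "item_detail": {"add_to_cart": "checkout"},
}

# Concept model: which page fragment is relevant in each state.
CONCEPT_MODEL = {
    "search":      "search form",
    "results":     "search-results list",
    "item_detail": "product description",
    "checkout":    "order summary",
}

def step(state, action):
    """Advance the transaction; return (new state, fragment to read aloud).
    Unknown actions leave the state unchanged."""
    next_state = PROCESS_MODEL.get(state, {}).get(action, state)
    return next_state, CONCEPT_MODEL[next_state]
```

The screen reader then speaks only the returned fragment instead of the whole page, which is the source of the reported reduction in information overload.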
999.
(abstract unavailable)
1000.
SimRank has become an important similarity measure for ranking web documents based on a graph model of hyperlinks. Existing approaches to SimRank computation follow an iterative paradigm; the most efficient deterministic technique takes O(n^3) worst-case time per iteration with O(n^2) space, where n is the number of nodes (web documents). In this paper, we propose novel optimization techniques such that each iteration takes O(min{n·m, n^r}) time and O(n + m) space, where m is the number of edges in the web-graph model and r ≤ log2 7. In addition, we extend the similarity transition matrix to prevent random surfers from getting stuck, and devise a pruning technique that eliminates impractical similarities at each iteration. Moreover, we develop a reordering technique combined with an over-relaxation method that not only speeds up the convergence of the existing techniques but also achieves I/O efficiency. Extensive experiments on both synthetic and real data sets demonstrate the efficiency and effectiveness of our iteration techniques.
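For reference, the basic O(n^3)-per-iteration SimRank recurrence that the paper's optimizations improve on can be sketched directly: s(a,a) = 1 and s(a,b) = C/(|I(a)||I(b)|) · Σ s(u,v) over in-neighbour pairs (u,v). The decay factor C = 0.8 is the commonly used default; the graph here is a dict of in-neighbour lists:

```python
def simrank(in_neighbors, C=0.8, iterations=10):
    """Naive iterative SimRank over a dict node -> list of in-neighbours."""
    nodes = list(in_neighbors)
    # Initial scores: s0(a,b) = 1 if a == b else 0.
    sim = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
    for _ in range(iterations):
        new = {a: {b: 0.0 for b in nodes} for a in nodes}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[a][b] = 1.0
                    continue
                ia, ib = in_neighbors[a], in_neighbors[b]
                if ia and ib:  # nodes with no in-links keep similarity 0
                    total = sum(sim[u][v] for u in ia for v in ib)
                    new[a][b] = C * total / (len(ia) * len(ib))
        sim = new
    return sim
```

On a tiny web graph where pages "a" and "b" are both linked from "c", the fixed point gives s(a,b) = C · s(c,c) = 0.8, which is what makes the quoted per-iteration cost and the paper's pruning of near-zero entries matter at web scale.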