991.
992.
We show that, for arbitrary positive integers, the gcd of two linear combinations of these integers with rather small random integer coefficients coincides with the gcd of the integers themselves with constant probability. This naturally leads to a probabilistic algorithm for computing the gcd of several integers via just one gcd computation on two numbers of about the same size as the initial data (namely, the two linear combinations above). The algorithm can be repeated to achieve any desired confidence level.
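A minimal sketch of the resulting algorithm, assuming an illustrative coefficient bound and repetition count (neither value is taken from the paper):

```python
import math
import random
from functools import reduce

def gcd_via_random_combinations(numbers, coeff_bound=1000, trials=20):
    """Estimate gcd(numbers) from gcds of pairs of random linear combinations.

    Each trial draws two random coefficient vectors, forms the two linear
    combinations, and takes their gcd; every trial value is a multiple of the
    true gcd, so the gcd over a few trials converges to it quickly.
    """
    result = 0
    for _ in range(trials):
        c1 = [random.randint(1, coeff_bound) for _ in numbers]
        c2 = [random.randint(1, coeff_bound) for _ in numbers]
        x = sum(c * n for c, n in zip(c1, numbers))
        y = sum(c * n for c, n in zip(c2, numbers))
        result = math.gcd(result, math.gcd(x, y))
    return result

# Sanity check against the direct multi-way gcd.
nums = [random.randint(1, 10**12) * 42 for _ in range(10)]
assert gcd_via_random_combinations(nums) == reduce(math.gcd, nums)
```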
993.
Commonly known detail-in-context techniques for the two-dimensional Euclidean plane enlarge details and shrink their context using mapping functions that introduce geometric compression. This makes it difficult or even impossible to recognize shapes when magnification factors differ greatly. In this paper we propose to use the complex logarithm and the complex root functions to show very small details even in very large contexts. These mappings are conformal, which means they only locally rotate and scale, thus keeping shapes intact and recognizable. They allow details that are orders of magnitude smaller than their surroundings to be shown together with their context in one seamless visualization. We address the use of this universal technique for interacting with complex two-dimensional data, considering the exploration of large graphs and other examples.
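A small sketch of the complex-log mapping applied to 2D points; the focus point and the epsilon guard are illustrative assumptions, not details from the paper:

```python
import numpy as np

def complex_log_map(points, focus, eps=1e-9):
    """Map 2D points with a complex-log detail-in-context transform.

    Points near `focus` are magnified and distant points are compressed; because
    log is analytic away from 0, the map is conformal, so local angles (and hence
    local shapes) are preserved.
    """
    z = (points[:, 0] - focus[0]) + 1j * (points[:, 1] - focus[1])
    w = np.log(z + eps)                      # radius -> log-radius, angle kept
    return np.column_stack([w.real, w.imag])

# Example: points on two circles around the focus map to two parallel vertical
# lines, illustrating how magnification varies with distance from the focus.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
near = np.column_stack([0.1 * np.cos(theta), 0.1 * np.sin(theta)])
far = 100.0 * near
print(complex_log_map(near, focus=(0.0, 0.0))[:3])
print(complex_log_map(far, focus=(0.0, 0.0))[:3])
```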
994.
Recent advances in algorithms and graphics hardware have made it possible to render tetrahedral grids at interactive rates on commodity PCs. This paper extends this work by presenting a direct volume rendering method for such grids that supports both current and upcoming graphics hardware architectures, large and deformable grids, and different rendering options. At the core of our method is the idea of sampling tetrahedral elements along the view rays entirely in local barycentric coordinates. Sampling then requires minimal GPU memory and texture access operations, and it maps efficiently onto a feed-forward pipeline of multiple stages performing computation and geometry construction. We propose to spawn rendered elements from a single vertex, which makes the method amenable to upcoming Direct3D 10 graphics hardware that can create geometry on the GPU. With only slight modifications, the algorithm can also be used to render per-pixel iso-surfaces and to perform tetrahedral cell projection. As our method requires neither pre-processing nor an intermediate grid representation, it can efficiently deal with dynamic and large 3D meshes.
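A CPU-side sketch of sampling a ray segment inside one tetrahedron in barycentric coordinates; the per-vertex scalar values and the linear interpolation are illustrative assumptions, while the paper's method runs this kind of computation on the GPU:

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p with respect to a 4x3 array of tetra vertices."""
    # Solve [v1-v0, v2-v0, v3-v0] * (b1, b2, b3) = p - v0, then b0 = 1 - b1 - b2 - b3.
    T = (tet[1:] - tet[0]).T
    b123 = np.linalg.solve(T, p - tet[0])
    return np.concatenate([[1.0 - b123.sum()], b123])

def sample_ray_in_tet(origin, direction, tet, vertex_values, t0, t1, n=8):
    """Sample a scalar field along the ray segment [t0, t1] inside the tetrahedron.

    Barycentric coordinates are affine in the ray parameter, so the weights could
    equally be interpolated between the entry and exit points -- the property that
    makes a purely barycentric GPU formulation cheap.
    """
    samples = []
    for t in np.linspace(t0, t1, n):
        b = barycentric(origin + t * direction, tet)
        samples.append(float(b @ vertex_values))   # linear interpolation of vertex values
    return samples

tet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0, 3.0])
print(sample_ray_in_tet(np.array([0.1, 0.1, 0.1]), np.array([1.0, 0.0, 0.0]), tet, values, 0.0, 0.5))
```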
995.
This paper introduces a uniform statistical framework for both 3-D and 2-D object recognition using intensity images as input data. The theoretical part provides a mathematical tool for stochastic modeling; the algorithmic part introduces methods for automatic model generation, localization, and recognition of objects. 2-D images are used for learning the statistical appearance of 3-D objects; both the depth information and the matching between image and model features are missing for model generation. The resulting incomplete-data estimation problem is solved by the Expectation Maximization algorithm, which leads to a novel class of algorithms for automatic model generation from projections. The estimation of pose parameters corresponds to a non-linear maximum likelihood estimation problem, which is solved by a global optimization procedure. Classification is done by the Bayesian decision rule. This work includes an experimental evaluation of the various facets of the presented approach: an empirical evaluation of learning algorithms and a comparison of different pose estimation algorithms show the feasibility of the proposed probabilistic framework.
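A toy sketch of the Bayesian decision rule used for the final classification step; the Gaussian class-conditional densities and the example feature vectors are illustrative assumptions, not the paper's learned appearance models:

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify(x, class_models, priors):
    """Bayes decision rule: pick the class with maximal posterior p(class | x).

    class_models maps a class name to (mean, covariance) of its feature density;
    in the paper's framework the class-conditional likelihood would instead come
    from the learned statistical object model, maximized over pose parameters.
    """
    scores = {
        c: multivariate_normal.logpdf(x, mean=m, cov=S) + np.log(priors[c])
        for c, (m, S) in class_models.items()
    }
    return max(scores, key=scores.get)

models = {
    "cup":   (np.array([0.0, 0.0]), np.eye(2)),
    "plate": (np.array([3.0, 1.0]), np.eye(2)),
}
print(classify(np.array([2.5, 0.8]), models, {"cup": 0.5, "plate": 0.5}))
```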
996.
We study a resource allocation problem where jobs have the following characteristic: each job consumes some quantity of a bounded resource during a certain time interval and yields a given profit. We aim to select a subset of jobs with maximal total profit such that the total resource consumed at any point in time remains bounded by the given resource capacity. While this problem is trivially NP-hard in general and polynomially solvable for uniform resource consumptions, our main result shows NP-hardness for the case of general resource consumptions but uniform profit values, i.e. for maximizing the number of performed jobs. This result holds even for proper time intervals. We also give a deterministic (1/2−ε)-approximation algorithm for the general problem on proper intervals, improving upon the currently known 1/3 ratio for general intervals. Finally, we study the worst-case performance ratio of a simple greedy algorithm.
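A minimal sketch of a simple greedy heuristic for this problem; the profit-ordered rule, the discrete time grid, and the example jobs are illustrative assumptions and not necessarily the greedy algorithm analyzed in the paper:

```python
from collections import defaultdict

def greedy_select(jobs, capacity):
    """Greedily pick jobs by decreasing profit while keeping resource usage feasible.

    Each job is (start, end, demand, profit), consuming `demand` over the half-open
    interval [start, end). A job is accepted only if adding its demand keeps total
    consumption within `capacity` at every time unit it covers.
    """
    usage = defaultdict(float)          # time unit -> resource currently consumed
    chosen, total_profit = [], 0.0
    for start, end, demand, profit in sorted(jobs, key=lambda j: -j[3]):
        if all(usage[t] + demand <= capacity for t in range(start, end)):
            for t in range(start, end):
                usage[t] += demand
            chosen.append((start, end, demand, profit))
            total_profit += profit
    return chosen, total_profit

jobs = [(0, 4, 3.0, 10.0), (2, 6, 2.0, 8.0), (3, 5, 2.0, 5.0)]
print(greedy_select(jobs, capacity=4.0))
```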
997.
Effective river restoration aims for the recovery of ecosystem functions by restoring processes and connectivity to the floodplain. At the straightened lowland river Stör in northern Germany, a sequence of 15 new meanders was created in 2008, with wavelengths of up to 70 m. The newly created areas within the meander bends range in size from 215 to 1,115 m² and function as a series of 15 restored floodplain sites, which are subject to succession. Seven years after the restoration measures, we investigated the vegetation dynamics on the (a) restored floodplains and compared them with adjacent floodplain sites used as (b) low-intensity grazed grassland or as (c) abandoned grassland. We analysed the species diversity, functional vegetation parameters, and plant communities of 200 plots within the floodplain area of the three floodplain types and of 246 plots at their river banks. Plant species diversity and composition differed with respect to restoration measure and site management. Restored floodplains showed a higher coverage of wet-grassland and softwood-forest species and higher species diversity than abandoned grasslands. Grazed grasslands showed the highest species numbers and coverage of pioneer vegetation. The banks showed fewer differences in species composition between floodplain types. The construction of restored floodplains yielded greater overall plant diversity by promoting the development of typical floodplain vegetation. Shallow meanders with increased flooding intensity and the creation of varying microrelief are recommended as combined river/floodplain measures in order to foster processes and connectivity between valley components.
998.
The advent of the social web brought with it challenges and opportunities for research on learning and knowledge construction. Using the online encyclopedia Wikipedia as an example, we discuss several methods that can be applied to analyze the dynamic nature of knowledge-related processes in mass collaboration environments. These methods can help in the analysis of the interactions between the two levels that are relevant in computer-supported collaborative learning (CSCL) research: the individual level of learners and the collective level of the group or community. In line with constructivist theories of learning, we argue that the development of knowledge on both levels is triggered by productive friction, that is, the prolific resolution of socio-cognitive conflicts. By describing three prototypical methods used in previous Wikipedia research, we review how these techniques can be used to examine the dynamics on both levels and to analyze how these dynamics can be predicted by the amount of productive friction. We illustrate how these studies make use of text classifiers, social network analysis, and cluster analysis in order to operationalize the theoretical concepts. We conclude by discussing implications for the analysis of dynamic knowledge processes from a learning-sciences perspective.
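As a hedged illustration of the social-network-analysis step, the sketch below builds a co-editing network from hypothetical revision records and computes degree centrality; the data layout and the centrality choice are assumptions, not the setup used in the cited studies:

```python
from itertools import combinations
import networkx as nx

# Hypothetical revision log: (editor, article) pairs extracted from page histories.
revisions = [
    ("alice", "Evolution"), ("bob", "Evolution"), ("carol", "Evolution"),
    ("alice", "Genetics"), ("bob", "Genetics"), ("dave", "Physics"),
]

# Co-editing network: editors are linked when they edited the same article.
articles = {}
for editor, article in revisions:
    articles.setdefault(article, set()).add(editor)

G = nx.Graph()
for editors in articles.values():
    G.add_nodes_from(editors)
    G.add_edges_from(combinations(sorted(editors), 2))

# Degree centrality as a simple indicator of an editor's embeddedness in the community.
print(nx.degree_centrality(G))
```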
999.
Online communities are attractive sources of ideas relevant for new product development and innovation. However, making sense of the ‘big data’ in these communities is a complex analytical task. A systematic way of dealing with these data is needed to exploit their potential for boosting companies' innovation performance. We propose a method for analysing online community data with a special focus on identifying ideas. We employ a research design where two human raters classified 3,000 texts extracted from an online community, according to whether the text contained an idea. Among the 3,000, 137 idea texts and 2,666 non‐idea texts were identified. The human raters could not agree on the remaining 197 texts. These texts were omitted from the analysis. The remaining 2,803 texts were processed by using text mining techniques and used to train a classification model. We describe how to tune the model and which text mining steps to perform. We conclude that machine learning and text mining can be useful for detecting ideas in online communities. The method can help researchers and firms identify ideas hidden in large amounts of texts. Also, it is interesting in its own right that machine learning can be used to detect ideas.
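A minimal sketch of such an idea classifier; the TF-IDF features, the logistic-regression model, and the toy labelled texts are illustrative assumptions rather than the pipeline tuned in the study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled sample: 1 = contains an idea, 0 = does not.
texts = [
    "You should add a removable battery pack to the camera grip",
    "Great shots everyone, loving this thread",
    "What if the app let users schedule uploads overnight?",
    "Thanks for the link, very helpful",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier, a common baseline for idea detection.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["Could the forum support tagging posts as feature requests?"]))
```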
1000.
We present an empirical analysis of a web forum in which followers of a health-related community exchange information and opinions in order to pass on and develop relevant knowledge. We analyzed how knowledge construction takes place in such a community, which represents an outsider position not accepted by mainstream society. For this purpose we applied the Community of Practice (CoP) concept as a guideline for our analysis and found that many well-known activities of CoPs were present in the Urkost community. The social network analysis findings also supported interpreting this community as a CoP. However, we also found that this community has a variety of structural characteristics that the CoP literature addresses insufficiently. We identified the structure of goals, roles, and communication as relevant features typical of this outsider CoP. For example, the attitude of the core members towards people of a ‘different faith’ was characterized by strong hostility and rejection. These features provided an effective basis for the development and maintenance of a shared identity in the community. Our findings are discussed against the background of the need for further development of the CoP concept.