1033 query results found (search time: 31 ms)
1.
Multi-index hashing (MIH) is the state-of-the-art method for indexing binary codes. However, it assumes that the dataset's codes are uniformly distributed, and its efficiency drops when the codes are non-uniformly distributed. In this paper, we propose a data-oriented multi-index hashing method. We first compute the correlations between bits and learn an adaptive projection vector for each binary substring. Then, instead of using the substrings as direct indices into the hash tables, we project them with their corresponding projection vectors to generate new indices. With adaptive projection, the indices in each hash table are nearly uniformly distributed. We also put forward an entropy-based measure to evaluate the distribution of data items in each hash table. Experiments on large-scale reference datasets show that our method improves query time over MIH by 36.9% to 87.4%.
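The substring-indexing idea behind baseline MIH can be sketched in a few lines of Python. This is a minimal illustration of the uniform, unprojected baseline only, not the paper's adaptive-projection variant; the class and function names are ours:

```python
from collections import defaultdict

def substrings(code, m):
    # Split a bit-string into m contiguous substrings.
    step = len(code) // m
    return [code[i * step:(i + 1) * step] for i in range(m)]

class MultiIndexHash:
    """Baseline multi-index hashing: one hash table per substring."""
    def __init__(self, codes, m=4):
        self.codes, self.m = codes, m
        self.tables = [defaultdict(list) for _ in range(m)]
        for idx, code in enumerate(codes):
            for t, sub in enumerate(substrings(code, m)):
                self.tables[t][sub].append(idx)

    def query(self, q, r):
        # Pigeonhole: if hamming(q, c) <= r < m, then q and c agree
        # exactly on at least one of the m substrings.
        assert r < self.m
        candidates = set()
        for t, sub in enumerate(substrings(q, self.m)):
            candidates.update(self.tables[t].get(sub, []))
        return sorted(i for i in candidates
                      if sum(a != b for a, b in zip(q, self.codes[i])) <= r)
```

A query touches only m buckets instead of scanning every code; the paper's contribution is to project each substring before bucketing so that the buckets fill uniformly even on skewed data.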
2.
We propose an improved parallel adaptive indexing algorithm for multi-core CPUs to address two problems of existing parallel adaptive indexing algorithms: they cannot take full advantage of a chip multiprocessor's (CMP's) parallel execution resources, and they handle the sequential query pattern poorly. Building on an optimized Refined Partition Merge algorithm, our algorithm combines the Parallel Database Cracking method with Refined Partition Merge. When the index contains few data chunks, we use the optimized Refined Partition Merge algorithm to reduce the probability of conflicts between threads, decrease waiting time, and increase thread utilization; when the index contains many data chunks, we use the Parallel Database Cracking method to take full advantage of the CMP's parallel execution resources. We also propose a robustness optimization that makes the algorithm suitable for two common query patterns. Experiments show that our method reduces query time by 25.7% to 33.2% and handles the common query patterns well.
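For readers unfamiliar with adaptive indexing, the single-threaded database-cracking idea that both combined methods build on can be sketched as follows. This is a simplified sequential sketch; the class and method names are ours, and the in-place pivoting of a real column store is simulated with Python lists:

```python
import bisect

class CrackedColumn:
    """Adaptive index: each range query partitions only the pieces
    of the column it actually touches, so the index is built as a
    side effect of querying."""
    def __init__(self, data):
        self.data = list(data)
        self.pivots = []     # sorted pivot values
        self.positions = []  # positions[i] = first index with value >= pivots[i]

    def _crack(self, v):
        i = bisect.bisect_left(self.pivots, v)
        if i < len(self.pivots) and self.pivots[i] == v:
            return self.positions[i]  # already cracked on this value
        lo = self.positions[i - 1] if i > 0 else 0
        hi = self.positions[i] if i < len(self.positions) else len(self.data)
        piece = self.data[lo:hi]
        left = [x for x in piece if x < v]
        right = [x for x in piece if x >= v]
        self.data[lo:hi] = left + right
        self.pivots.insert(i, v)
        self.positions.insert(i, lo + len(left))
        return lo + len(left)

    def range_query(self, low, high):
        a = self._crack(low)
        b = self._crack(high)
        return self.data[a:b]
```

Each query leaves the column a little more ordered, so later queries touch smaller pieces; the paper's contribution is how to do this partitioning well under multiple threads.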
3.
We investigate the automated identification of weak signals in Ansoff's sense to improve strategic planning and technological forecasting. The literature shows that weak signals can be found in an organization's environment and that they appear in different contexts. We use internet information to represent the organization's environment and select the websites that are related to a given hypothesis. In contrast to related research, we provide a methodology that uses latent semantic indexing (LSI) to identify weak signals. This improves on existing knowledge-based approaches because LSI considers aspects of meaning and is therefore able to identify similar textual patterns in different contexts. A new weak-signal maximization approach is introduced that replaces the prediction modeling approach commonly used with LSI. It calculates the largest number of relevant weak signals represented by singular value decomposition (SVD) dimensions. A case study identifies and analyzes weak signals to predict trends in the field of on-site medical oxygen production, supporting the planning of research and development (R&D) for a medical oxygen supplier. The results show that the proposed methodology enables organizations to identify weak signals from the internet for a given hypothesis, helping strategic planners react ahead of time.
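The SVD step at the core of LSI can be illustrated with a tiny pure-Python power iteration that extracts the top singular dimension of a term-by-document matrix. This is a one-dimensional sketch of what the paper does with many SVD dimensions; real systems use an optimized SVD library:

```python
def top_right_singular_vector(A, iters=100):
    """Power iteration on A^T A: returns the dominant right-singular
    vector of the term-by-document matrix A. Its c-th component is
    document c's coordinate on the strongest latent semantic dimension."""
    rows, cols = len(A), len(A[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(A[r][c] * v[c] for c in range(cols)) for r in range(rows)]
        v = [sum(A[r][c] * u[r] for r in range(rows)) for c in range(cols)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v
```

Documents whose coordinates lie close together on such a dimension share a latent topic even when they use different words, which is what lets LSI spot similar textual patterns across contexts.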
4.
Cross impact analysis (CIA) comprises a set of related methodologies that predict the occurrence probability of a specific event and the conditional probability of a first event given a second event. The conditional probability can be interpreted as the impact of the second event on the first. Most CIA methodologies are qualitative, meaning the occurrence and conditional probabilities are estimated by human experts. In recent years, a growing number of quantitative methodologies have drawn on large volumes of data from databases and the internet. Nearly 80% of the data available on the internet is textual, and the literature therefore proposes knowledge-structure-based approaches that calculate the conditional probabilities from textual information. In contrast to related methodologies, this work proposes a new quantitative CIA methodology that predicts the conditional probability from the semantic structure of given textual information. Latent semantic indexing is used to identify the hidden semantic patterns behind an event and to calculate the impact of those patterns on other semantic textual patterns representing a different event. This makes it possible to calculate the conditional probabilities semantically. A case study shows that this semantic approach can be used to predict the conditional probability of one technology given a different technology.
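As a much-simplified illustration of a quantitative, text-based CIA step, the conditional probability of event A given event B can be estimated from how often the two events co-occur in a document collection. This keyword-counting sketch (our own hypothetical helper) deliberately omits the semantic LSI layer the paper adds on top:

```python
def conditional_probability(docs, event_a, event_b):
    """Estimate P(A | B) as the fraction of documents mentioning
    event B that also mention event A."""
    b_docs = [d for d in docs if event_b in d]
    if not b_docs:
        return 0.0
    return sum(event_a in d for d in b_docs) / len(b_docs)
```

The paper's point is that matching raw keywords misses paraphrases; replacing the substring test with similarity between LSI patterns makes the same counting idea work across different wordings.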
5.
Little work has been reported in the literature on supporting k-nearest neighbor (k-NN) searches/queries in hybrid data spaces (HDS). An HDS combines continuous and non-ordered discrete dimensions, a combination that presents new challenges in data organization and search ordering. In this paper, we present an algorithm for k-NN searches using a multidimensional index structure in hybrid data spaces. We examine the concept of search stages and use the properties of an HDS to derive a new search heuristic that greatly reduces the number of disk accesses in the initial stage of searching. Further, we present a performance model for our algorithm that estimates the cost of performing such searches. Our experimental results demonstrate the effectiveness of our algorithm and the accuracy of our performance estimation model.
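To make the setting concrete, a hybrid-space distance typically sums a Euclidean term over the continuous dimensions and a mismatch count over the non-ordered discrete ones. The brute-force sketch below is our own illustration of that distance with a naive k-NN on top; it is not the paper's index-based search algorithm:

```python
import heapq

def hybrid_distance(a, b, cont_dims, disc_dims):
    # Euclidean over continuous dims + mismatch count over discrete dims.
    cont = sum((a[i] - b[i]) ** 2 for i in cont_dims) ** 0.5
    disc = sum(a[i] != b[i] for i in disc_dims)
    return cont + disc

def knn(query, data, k, cont_dims, disc_dims):
    # Naive k-NN: scan everything; an HDS index avoids this full scan.
    return heapq.nsmallest(
        k, data, key=lambda x: hybrid_distance(query, x, cont_dims, disc_dims))
```

The discrete term has no ordering to exploit, which is exactly why search ordering in an HDS index needs heuristics beyond those of purely continuous spaces.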
6.
7.
Some approximate indexing schemes have recently been proposed for metric spaces that sort the objects in the database according to pseudo-scores. It is known that (1) some of them provide a very good trade-off between response time and accuracy, and (2) probability-based pseudo-scores can provide an optimal trade-off in range queries if the probabilities are correctly estimated. Based on these facts, we propose a probabilistic enhancement scheme that can be applied to any pseudo-score-based scheme. Our scheme computes probability-based pseudo-scores from the pseudo-scores produced by the underlying scheme. To estimate the probability-based pseudo-scores, we use object-specific parameters in logistic regression and learn the parameters with maximum a posteriori (MAP) estimation and the empirical Bayes method. We also propose a technique that speeds up parameter learning using the pseudo-scores. We applied our scheme to two state-of-the-art schemes, the standard pivot-based scheme and the permutation-based scheme, and evaluated them on a variety of datasets from the Metric Space Library. The results show that our scheme outperformed the conventional schemes in both the number of distance computations and CPU time on all datasets.
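For context, the standard pivot-based scheme ranks objects by a triangle-inequality lower bound on their distance to the query; this lower bound is the pseudo-score that the paper's probabilistic layer then recalibrates. A minimal sketch, with function names of our own choosing:

```python
def pivot_pseudo_scores(query, objects, pivots, dist):
    """Pseudo-score of x: max_p |d(q, p) - d(x, p)|, a lower bound on
    d(q, x) by the triangle inequality. Smaller = more promising."""
    dq = [dist(query, p) for p in pivots]
    return [max(abs(dq[j] - dist(x, p)) for j, p in enumerate(pivots))
            for x in objects]

def rank_by_pseudo_score(query, objects, pivots, dist):
    # Visit objects in increasing pseudo-score order; true distances
    # are then computed only for the most promising candidates.
    scores = pivot_pseudo_scores(query, objects, pivots, dist)
    return [x for _, x in sorted(zip(scores, objects))]
```

In a real index, the distances d(x, p) are precomputed at build time, so scoring a query needs only one distance computation per pivot.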
8.
This paper describes the manufacture of the indexing worm gear pair, a key component of one plant's gear hobbing machines, gear shaping machines, CNC rotary tables, and related products. It systematically analyzes the problems found at each stage, from design drawings and process routing through machining equipment to cutting tools and inspection and measurement, then proposes an improvement program and puts it into practice with excellent results: manufacturing accuracy was raised from grades 6-5 to grade 4.
9.
Content-based image retrieval (CBIR) makes it possible to extract valuable information quickly and efficiently from massive image collections; representing images by local features and performing similarity retrieval on that basis is currently a popular research topic. This paper applies a high-dimensional local invariant feature extraction algorithm and an LSH indexing algorithm to a content-based image retrieval system, and experimental results demonstrate the effectiveness of the approach.
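The LSH idea used for indexing such high-dimensional descriptors can be sketched with random-hyperplane hashing for cosine similarity. This is one generic LSH family; the paper does not specify which family it uses, and all names below are ours:

```python
import random

def make_hyperplane_hash(dim, n_bits, seed=42):
    """Return a hash mapping a vector to an n_bits signature; vectors
    with a small angle between them collide with high probability."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def signature(v):
        # One bit per random hyperplane: which side does v fall on?
        return tuple(int(sum(p[i] * v[i] for i in range(dim)) >= 0)
                     for p in planes)
    return signature
```

At index time, descriptors are bucketed by signature; a query then compares full feature vectors only against the descriptors in its own bucket instead of scanning the whole collection.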
10.
Using the secondary-development capabilities of CAD software to simulate the machining process in which multiple milling cutters cut a globoidal indexing cam simultaneously, this paper presents a modeling method for the computer-aided design of globoidal indexing cams. A worked example shows that the method is correct and feasible and that it greatly simplifies the modeling of globoidal indexing cams, while also providing a basis for building a general-purpose CAD system for cams with complex working profile surfaces.