Paid full text: 105 articles; free: 4 articles.
By subject: electrical engineering (1), chemical industry (10), building science (2), energy and power (7), light industry (4), petroleum and natural gas (1), radio engineering (10), general industrial technology (18), metallurgical industry (2), automation technology (54).
By year: 2024 (1), 2023 (1), 2022 (4), 2021 (7), 2020 (7), 2019 (3), 2018 (1), 2017 (2), 2016 (2), 2015 (2), 2014 (5), 2013 (9), 2012 (5), 2011 (5), 2010 (4), 2009 (3), 2008 (2), 2007 (3), 2006 (7), 2005 (4), 2004 (4), 2003 (6), 2002 (1), 2001 (3), 1999 (1), 1998 (5), 1997 (2), 1996 (2), 1995 (2), 1994 (3), 1990 (2), 1989 (1).
109 results found (search time: 156 ms).
1.
Exploring spatial datasets with histograms   (cited 2 times: 0 self-citations, 2 by others)
As online spatial datasets grow in both number and sophistication, it becomes increasingly difficult for users to decide whether a dataset is suitable for their tasks, especially when they have no prior knowledge of it. In this paper, we propose browsing as an effective and efficient way to explore the content of a spatial dataset. Browsing lets users view the size of a result set before evaluating the query at the database, thereby avoiding zero-hit/mega-hit queries and saving time and resources. Although the underlying technique supporting browsing is similar to range-query aggregation and selectivity estimation, spatial dataset browsing poses some unique challenges. We identify a set of spatial relations that need to be supported in browsing applications, namely the contains, contained, and overlap relations. We prove a lower bound on the storage required to answer queries about the contains relation accurately at a given resolution. We then present three storage-efficient approximation algorithms, which we believe to be the first to estimate query results for these spatial relations. We evaluate these algorithms on both synthetic and real-world datasets and show that they provide highly accurate estimates for datasets with various characteristics. Recommended by: Sunil Prabhakar. Work supported by NSF grants IIS 02-23022 and CNF 04-23336. An earlier version of this paper appeared in the 17th International Conference on Data Engineering (ICDE 2001).
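To make the browsing idea concrete, here is a minimal sketch of result-size estimation with a grid histogram. It is a generic centroid-assignment baseline, not the paper's storage-optimal approximation algorithms; the grid resolution, the assignment rule, and the estimator are assumptions made for illustration.

```python
# A minimal sketch of browsing-style result-size estimation: summarize
# objects' MBRs in a grid histogram, then estimate overlap counts from
# the histogram alone, without touching the raw data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float

class GridHistogram:
    def __init__(self, extent: Rect, nx: int, ny: int):
        self.extent, self.nx, self.ny = extent, nx, ny
        self.counts = [[0] * ny for _ in range(nx)]

    def _cell(self, x: float, y: float):
        e = self.extent
        cx = min(self.nx - 1, max(0, int((x - e.x1) / (e.x2 - e.x1) * self.nx)))
        cy = min(self.ny - 1, max(0, int((y - e.y1) / (e.y2 - e.y1) * self.ny)))
        return cx, cy

    def add(self, mbr: Rect) -> None:
        # Each object is summarized by the cell holding its MBR centroid.
        cx, cy = self._cell((mbr.x1 + mbr.x2) / 2, (mbr.y1 + mbr.y2) / 2)
        self.counts[cx][cy] += 1

    def estimate_overlap(self, q: Rect) -> int:
        # Sum the counts of all cells the query rectangle touches: a
        # cheap, coarse estimate of the overlap-relation result size.
        lo, hi = self._cell(q.x1, q.y1), self._cell(q.x2, q.y2)
        return sum(self.counts[i][j]
                   for i in range(lo[0], hi[0] + 1)
                   for j in range(lo[1], hi[1] + 1))

h = GridHistogram(Rect(0, 0, 100, 100), nx=10, ny=10)
h.add(Rect(10, 10, 20, 20))
print(h.estimate_overlap(Rect(0, 0, 30, 30)))  # -> 1
```

A browser would show such an estimate before the query runs, letting the user back off from a mega-hit query or reformulate a zero-hit one.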
2.
In many distributed databases, locality of reference is crucial to achieving acceptable performance. However, the purpose of data distribution is to spread the data among several remote sites. One way to resolve this tension is to use partitioned-data techniques: instead of accessing the entire data set, a site works on a fraction that is made locally available, thereby increasing the site's autonomy. We present a theory of partitioned data that formalizes the concept and establishes the basis for developing a correctness criterion and a concurrency control protocol for partitioned databases. Set-serializability is proposed as the correctness criterion, and we suggest an implementation that integrates partitioned and non-partitioned data. To complete the study, the policies required in a real implementation are also analyzed. Recommended by: Hector Garcia-Molina.
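As a toy illustration of the partitioned-data idea, the sketch below shows a coordinator that leases disjoint fractions of a data set to sites, which then work on their fractions locally and autonomously. The Coordinator/acquire/release names are invented for illustration; this is not the paper's set-serializability protocol.

```python
# A toy illustration of partitioned data: disjoint fractions are leased
# to sites, so each site can operate on its fraction without remote
# coordination. Invented API, not the paper's protocol.
class Coordinator:
    def __init__(self, items):
        self.free = set(items)   # items not leased to any site
        self.leases = {}         # site id -> its leased fraction

    def acquire(self, site: str, k: int) -> set:
        """Lease up to k items; disjointness keeps local work conflict-free."""
        grant = set(list(self.free)[:k])
        self.free -= grant
        self.leases.setdefault(site, set()).update(grant)
        return grant

    def release(self, site: str) -> None:
        # Return the site's fraction to the free pool.
        self.free |= self.leases.pop(site, set())

c = Coordinator(range(10))
local = c.acquire("site-A", 3)  # site A now works on 3 items locally
c.release("site-A")
```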
3.

Context

Several metrics have been proposed to measure the extent to which class members are related. Connectivity-based class cohesion metrics measure the degree of connectivity among the class members.

Objective

We propose a new class cohesion metric that has higher discriminative power than any of the existing cohesion metrics. In addition, we empirically compare the connectivity and non-connectivity-based cohesion metrics.

Method

The proposed class cohesion metric is based on counting the number of possible paths in a graph that represents the connectivity pattern of the class members (a toy path-counting sketch follows this abstract). We theoretically and empirically validate this path connectivity class cohesion (PCCC) metric. The empirical validation compares seven connectivity-based metrics, including PCCC, and 11 non-connectivity-based metrics in terms of discriminative and fault detection powers. The discriminative-power study explores the probability that a cohesion metric will incorrectly judge classes to be cohesively equal when they have different connectivity patterns. The fault detection study investigates whether connectivity-based metrics, including PCCC, better explain the presence of faults from a statistical standpoint than non-connectivity-based cohesion metrics, considered individually or in combination.

Results

The theoretical validation demonstrates that PCCC satisfies the key cohesion properties. The empirical results indicate that, unlike the other connectivity-based cohesion metrics, PCCC has considerably higher discriminative power than any comparable cohesion metric. They also indicate that PCCC measures cohesion aspects not captured by other metrics: in predicting faulty classes, it is considerably better than the other connectivity-based metrics, though slightly worse than some of the non-connectivity-based ones.

Conclusion

PCCC is more useful in practice for applications in which practitioners need to distinguish between the quality of different classes or of different implementations of the same class.
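The sketch below illustrates the path-counting idea at the heart of the metric: enumerate simple paths between every pair of members in a graph of class members (methods and attributes). The graph encoding is an assumption, and only the raw path count is shown; PCCC's exact definition and normalization are given in the paper.

```python
# A sketch of path-based connectivity: count simple paths between every
# pair of nodes in the member-connectivity graph. Enumeration is
# exponential in the worst case, which is tolerable for class-sized graphs.
from itertools import combinations

def count_simple_paths(adj, src, dst, seen=None):
    seen = (seen or set()) | {src}
    if src == dst:
        return 1
    return sum(count_simple_paths(adj, n, dst, seen)
               for n in adj[src] if n not in seen)

def total_path_count(adj):
    # Sum over all unordered pairs of graph nodes.
    return sum(count_simple_paths(adj, u, v)
               for u, v in combinations(adj, 2))

# Example: methods m1..m3 connected only through a shared attribute a.
adj = {"m1": {"a"}, "m2": {"a"}, "m3": {"a"}, "a": {"m1", "m2", "m3"}}
print(total_path_count(adj))  # -> 6 simple paths across all pairs
```

Intuitively, richer connectivity patterns admit more distinct paths, which is what gives a path-count-based metric finer discrimination than metrics that only count edges or connected components.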
4.
A technique is proposed for scheduling and processor allocation that leads to the synthesis of integrated heterogeneous pipelined processing elements implementing digital signal processing applications. The proposed technique achieves efficient hardware implementations at the logic level by minimizing the number of processing units used, without compromising the rate- and delay-optimality criteria.

The proposed algorithm is found to outperform algorithms that produce homogeneous implementations: it gives schedules with lower iteration periods, requires fewer hardware resources, and has lower time complexity at design time. Compared with existing heterogeneous algorithms, it produces schedules with lower iteration periods for some applications while itself having lower time complexity. The optimal performance of the proposed algorithm has been verified on several benchmarks.
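As a rough illustration of heterogeneous processor allocation, the sketch below uses a greedy list-scheduling rule: reuse an idle unit of a compatible type, and allocate a new unit only when none is free. This is a generic baseline for the problem setting, not the rate- and delay-optimal algorithm proposed in the paper.

```python
# Greedy list scheduling onto heterogeneous units: minimize the unit
# count by reusing compatible idle units within given time slots.
def schedule(ops):
    """ops: list of (name, unit_type, start, duration)."""
    units = []        # each unit: {"type": ..., "busy_until": ...}
    placement = {}    # operation name -> unit index
    for name, utype, start, dur in sorted(ops, key=lambda o: o[2]):
        # Reuse the first unit of the right type that is idle at `start`.
        for i, u in enumerate(units):
            if u["type"] == utype and u["busy_until"] <= start:
                u["busy_until"] = start + dur
                placement[name] = i
                break
        else:
            # No compatible idle unit: allocate a new one.
            units.append({"type": utype, "busy_until": start + dur})
            placement[name] = len(units) - 1
    return placement, len(units)

# Example: two overlapping multiplies force two multiplier units.
ops = [("m1", "mul", 0, 2), ("m2", "mul", 1, 2), ("a1", "add", 2, 1)]
print(schedule(ops))  # -> placement map and a total of 3 units
```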

5.
6.
7.
In the present work, laminar premixed acetylene–hydrogen–air and ethanol–hydrogen–air flames were investigated numerically. Laminar flame speeds and adiabatic flame temperatures were obtained using the CHEMKIN PREMIX and EQUI codes, respectively. Sensitivity analysis was performed and the flame structure was analyzed. The results show that for acetylene–hydrogen–air flames, combustion is promoted by H and O radicals. The highest flame speed (247 cm/s) was obtained in the 95% H2–5% C2H2 mixture at λ = 1.0. The region 0.95 < XH2 < 1.0 is referred to as acetylene-accelerated hydrogen combustion, since the flame speed increases as the acetylene fraction in the mixture increases; further increases in the acetylene fraction decrease the H radical concentration in the flame front. In ethanol–hydrogen–air mixtures, the mixture reactivity is determined by H, OH, and O radicals. For XH2 < 0.6, the flame speed increases linearly with increasing hydrogen fraction. For XH2 > 0.8, hydrogen chemistry controls the combustion, and ethanol addition inhibits the reactivity and linearly reduces the laminar flame speed. For 0.6 < XH2 < 0.8, the laminar flame speed increases exponentially with increasing hydrogen fraction.
8.
Spatial database operations are typically performed in two steps. In the filtering step, indexes and the minimum bounding rectangles (MBRs) of the objects are used to quickly determine a set of candidate objects. In the refinement step, the actual geometries of the objects are retrieved and compared to the query geometry or to each other. Because of the complexity of the computational geometry algorithms involved, the CPU cost of the refinement step usually dominates the cost of the operation for complex geometries such as polygons. Although many run-time and pre-processing-based heuristics have been proposed to alleviate this problem, the CPU cost remains the bottleneck. In this paper, we propose a novel approach to this problem that uses the efficient rendering and searching capabilities of modern graphics hardware. The approach requires neither expensive pre-processing of the data nor changes to existing storage and index structures, and it applies to both intersection and distance predicates. We evaluate it by comparing its performance with leading software solutions. The results show that by combining hardware and software methods, the overall computational cost can be reduced substantially for both spatial selections and joins. We integrated this hardware/software co-processing technique into a popular database to evaluate its performance in the presence of indexes, pre-processing, and other proprietary optimizations. Extensive experimentation with real-world data sets shows that the hardware-accelerated technique not only outperforms the run-time software solutions but also performs as well as, if not better than, pre-processing-assisted techniques.
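The sketch below shows the two-step filter-and-refine pattern in plain software, assuming the shapely library for the exact predicate: the cheap MBR test prunes candidates, and only survivors pay for the exact geometry comparison. The paper's contribution, offloading the refinement step to graphics hardware, is not reproduced here.

```python
# Filter-and-refine spatial join: MBR overlap (filtering step) prunes
# pairs before the costly exact predicate (refinement step) runs.
from shapely.geometry import Polygon

def mbrs_overlap(a, b):
    # bounds = (minx, miny, maxx, maxy) of each geometry's MBR.
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a.bounds, b.bounds
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def spatial_join(polys_r, polys_s):
    results = []
    for r in polys_r:
        for s in polys_s:
            if mbrs_overlap(r, s):      # filtering step: cheap MBR test
                if r.intersects(s):     # refinement step: exact, costly
                    results.append((r, s))
    return results

r = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])]
s = [Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])]
print(len(spatial_join(r, s)))  # -> 1
```

In the paper's approach, the inner `intersects` call is what gets replaced by rendering-based tests on the GPU, which is why no change to the filtering indexes is needed.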
9.
10.
Glycerol removal from biodiesel using membrane separation technology   (cited 1 time: 0 self-citations, 1 by others)
Jehad Saleh, Marc A. Dubé. Fuel, 2010, 89(9): 2260-461
Membrane separation technology was used to remove free glycerol from biodiesel in order to meet the ASTM D6751 and EN 14214 standards. Fatty acid methyl esters (FAME) produced from canola oil and methanol were purified using ultrafiltration. The effect of different materials present in the transesterification reaction, such as water, soap, and methanol, on the final free-glycerol separation was studied. A modified polyacrylonitrile (PAN) membrane with a 100 kDa molecular-weight cut-off was used in all runs. Tests were performed at 25 °C and an operating pressure of 552 kPa. The free glycerol content in the feed, retentate, and permeate of the membrane system was analyzed by gas chromatography according to ASTM D6584. Results showed that low concentrations of water had a considerable effect on removing glycerol from the FAME, even at approximately 0.08 mass%. This is four orders of magnitude less than the amount of water required in a conventional biodiesel purification process using water washing. It is suggested that the separation of free glycerol from FAME was due to the removal of an ultrafine dispersed glycerol-rich phase present in the untreated FAME, as confirmed by the presence of particulates in the untreated FAME. Both the particle size and the free-glycerol separation increased with increasing water content of the FAME. The trends of separation and particle size versus water content in the FAME phase were very similar and exhibited a sudden increase at 0.08 mass% water in the untreated FAME. This supports the conclusion that water increased the size of the dispersed glycerol phase in the untreated FAME, leading to its separation by the ultrafiltration membrane. The technology was found to use 2.0 g of water per L of treated FAME (0.225 mass% water), versus 10 L of water per L of treated FAME in current practice.
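A quick back-of-the-envelope check of the reported water figures, assuming a FAME density of roughly 0.88 kg/L (an assumption; the density is not stated in this abstract):

```python
# Sanity check of the abstract's water numbers under an assumed FAME
# density; all figures except the density come from the abstract.
fame_density = 880.0           # g of FAME per L, assumed (~0.88 kg/L)
water_membrane = 2.0           # g water per L of FAME (reported)
water_washing = 10.0 * 1000.0  # 10 L water per L of FAME, in grams

# ~0.23 mass%, close to the 0.225 mass% quoted in the abstract.
print(water_membrane / fame_density * 100)

# ~1.4e4: the 0.08 mass% of water needed for separation is indeed about
# four orders of magnitude below conventional water-washing usage.
print(water_washing / (0.08 / 100 * fame_density))
```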