Full-text access type
Paid full text | 134 articles |
Free | 9 articles |
Subject category
General | 1 article |
Chemical industry | 44 articles |
Metalworking | 4 articles |
Machinery & instruments | 5 articles |
Mining engineering | 1 article |
Energy & power | 17 articles |
Light industry | 5 articles |
Radio & electronics | 12 articles |
General industrial technology | 19 articles |
Metallurgy | 1 article |
Automation technology | 34 articles |
Publication year
2024 | 2 articles |
2023 | 2 articles |
2022 | 2 articles |
2021 | 5 articles |
2020 | 3 articles |
2019 | 6 articles |
2018 | 7 articles |
2017 | 8 articles |
2016 | 5 articles |
2015 | 2 articles |
2014 | 9 articles |
2013 | 15 articles |
2012 | 15 articles |
2011 | 10 articles |
2010 | 3 articles |
2009 | 7 articles |
2008 | 8 articles |
2007 | 8 articles |
2006 | 6 articles |
2005 | 5 articles |
2004 | 7 articles |
2003 | 3 articles |
2002 | 3 articles |
1997 | 1 article |
1983 | 1 article |
Sort order: 143 results found (search time: 421 ms)
101.
Amol Sharma Jackie Range Vibhuti Agarwal 《中国计算机用户》2008,(21):26-27
It has become a clear trend for more and more electronic devices to take part in controlling cars. Formula One racing is not only a contest of automotive high technology; it also showcases future automobile technology to the public. Looking through historical records, one finds that in the 1950s and 1960s, car seat belts were not …
102.
Pawan D. Meshram Ravindra G. Puri Amol L. Patil Vikas V. Gite 《Journal of Coatings Technology and Research》2013,10(3):331-338
In this investigation, polyetheramide resin was prepared through the condensation polymerization of N,N-bis(2-hydroxyethyl) cottonseed oil fatty amide (HECOFA) with bisphenol-A. It was further modified with 2,4-toluene diisocyanate (TDI) at 10–30 wt% of polyetheramide to develop a series of moisture-curing urethane-modified polyetheramide resins (UMCOPEtA). The synthesized resin was characterized using ¹H NMR, ¹³C NMR, FTIR, and solubility in various organic solvents at room temperature. The thermal and curing behavior of the resin was investigated using thermogravimetric analysis and differential scanning calorimetry. Physico-chemical properties such as hydroxyl value, iodine value, and specific gravity, and mechanical properties such as scratch hardness, impact resistance, and flexibility, were determined by standard laboratory methods. Coatings of UMCOPEtA resin were prepared on mild steel panels to evaluate chemical resistance against acid, alkali, water, and xylene. The newly developed UMCOPEtA coatings showed improved hardness, impact resistance, gloss, and water and chemical resistance compared with unmodified polyetheramide coatings, and were thus found to be suitable as a high-performance coating material.
103.
Amol Ghoting Gregory Buehrer Srinivasan Parthasarathy Daehyun Kim Anthony Nguyen Yen-Kuang Chen Pradeep Dubey 《The VLDB Journal The International Journal on Very Large Data Bases》2007,16(1):77-96
Algorithms are typically designed to exploit the current state of the art in processor technology. However, as processor technology evolves, these algorithms often fail to achieve the maximum attainable performance on modern architectures. In this paper, we examine the performance of frequent pattern mining algorithms on a modern processor. A detailed performance study reveals that even the best frequent pattern mining implementations, with highly efficient memory managers, still grossly under-utilize a modern processor. The primary performance bottlenecks are poor data locality and low instruction-level parallelism (ILP). We propose a cache-conscious prefix tree to address this problem. The resulting tree improves spatial locality and also enhances the benefits of hardware cache-line prefetching. Furthermore, the design of this data structure allows the use of path tiling, a novel tiling strategy, to improve temporal locality. The result is an overall speedup of up to 3.2× over state-of-the-art implementations. We then show how these algorithms can be improved further by a non-naive thread-based decomposition that targets simultaneous multithreading (SMT) processors. A key aspect of this decomposition is ensuring cache reuse between threads that are co-scheduled at a fine granularity. This optimization affords an additional speedup of 50%, for an overall speedup of up to 4.8×. The proposed optimizations also provide performance improvements on SMPs and are likely to benefit emerging processors.
104.
Amol Dattatraya Mali 《Computational Intelligence》2002,18(3):386-419
Recently, casting planning as propositional satisfiability (SAT) has been shown to be an efficient technique of plan synthesis. This article is a response to the recently proposed challenge of developing novel propositional encodings that are based on a combination of different types of plan refinements and characterizing the tradeoffs. We refer to these encodings as hybrid encodings. An investigation of these encodings is important, because this can give insights into what kinds of planning problems can be solved faster with hybrid encodings.
Encodings based on partial-order planning and state-space planning have been reported in previous research. We propose a new type of encoding, called a unifying encoding, that subsumes these two encodings. We also report on several other hybrid encodings. Next, we show how the satisfiability framework can be extended to incremental planning. The state-space encoding is attractive because of its smaller size, and the causal encoding is attractive because of its high flexibility in reordering steps. We show that hybrid encodings have a larger size and lower flexibility in step reordering and thus do not combine the best of these encodings. We discuss in detail several specific planning scenarios where hybrid encodings are likely to be superior to non-hybrid encodings.
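The planning-as-satisfiability idea underlying these encodings can be illustrated with a toy sketch (this is purely illustrative and is not any of the article's actual encodings; the variable names, clause set, and brute-force solver are all invented here): a one-step problem is compiled into propositional clauses and checked for satisfiability by enumeration.

```python
from itertools import product

# A clause is a list of literals; a literal is (variable, polarity).
def satisfiable(variables, clauses):
    """Brute-force SAT check: try every truth assignment in turn."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[v] == pol for v, pol in clause) for clause in clauses):
            return model
    return None

# Toy one-step planning problem: action "move" has precondition at_A at
# step 0 and effect at_B at step 1; the initial state holds at_A(0) and
# the goal is at_B(1).
variables = ["at_A_0", "at_B_1", "move_0"]
clauses = [
    [("at_A_0", True)],                        # initial state
    [("at_B_1", True)],                        # goal
    [("move_0", False), ("at_A_0", True)],     # move_0 -> at_A_0 (precondition)
    [("at_B_1", False), ("move_0", True)],     # at_B_1 -> move_0 (explanatory axiom)
]
model = satisfiable(variables, clauses)
```

Any satisfying model of these clauses must set `move_0` to true, i.e., the extracted plan executes the action.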
105.
A unified approach to ranking in probabilistic databases  Total citations: 1 (self-citations: 0, citations by others: 1)
Jian Li Barna Saha Amol Deshpande 《The VLDB Journal The International Journal on Very Large Data Bases》2011,20(2):249-275
Ranking is a fundamental operation in data analysis and decision support and plays an even more crucial role if the dataset being explored exhibits uncertainty. This has led to much work in recent years on understanding how to rank the tuples in a probabilistic dataset. In this article, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criterion optimization problem and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRF^ω and PRF^e, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating-function-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRF^e, at approximating other ranking functions, and the scalability of our proposed algorithms for exact or approximate ranking.
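The generating-function technique mentioned in the abstract can be sketched for the simplest setting, independent tuples and the probability of landing in the top-k (a toy baseline only, not the article's parameterized ranking machinery; the function name and input format are invented for illustration). A tuple is in the top-k iff it exists and fewer than k higher-scored tuples exist; the latter probability falls out of expanding the polynomial ∏(1−pⱼ + pⱼx) over the higher-scored tuples:

```python
def topk_probability(tuples, k):
    """P[tuple is in the top-k] for independent tuples given as
    (score, existence_probability) pairs; scores assumed distinct."""
    order = sorted(range(len(tuples)), key=lambda i: -tuples[i][0])
    result = {}
    # dp[j] = P[exactly j of the higher-scored tuples seen so far exist]
    dp = [1.0]
    for i in order:
        score, p = tuples[i]
        # tuple i exists AND fewer than k higher-scored tuples exist
        result[i] = p * sum(dp[:k])
        # fold tuple i into the generating function for later tuples
        new = [0.0] * (len(dp) + 1)
        for j, q in enumerate(dp):
            new[j] += q * (1 - p)
            new[j + 1] += q * p
        dp = new
    return result
```

For example, with tuples `[(10, 0.5), (5, 1.0)]` and k = 1, the first tuple is top-1 with probability 0.5, and the second is top-1 exactly when the first is absent, also 0.5.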
106.
Abdul Quamar Amol Deshpande Jimmy Lin 《The VLDB Journal The International Journal on Very Large Data Bases》2016,25(2):125-150
There is an increasing interest in executing complex analyses over large graphs, many of which require processing a large number of multi-hop neighborhoods or subgraphs. Examples include ego network analysis, motif counting, finding social circles, personalized recommendations, link prediction, anomaly detection, analyzing influence cascades, and others. These tasks are not well served by existing vertex-centric graph processing frameworks, where user programs are only able to directly access the state of a single vertex at a time, resulting in high communication, scheduling, and memory overheads in executing such tasks. Further, most existing graph processing frameworks ignore the challenges in extracting the relevant portions of the graph that an analysis task is interested in, and loading those onto distributed memory. This paper introduces NScale, a novel end-to-end graph processing framework that enables the distributed execution of complex subgraph-centric analytics over large-scale graphs in the cloud. NScale enables users to write programs at the level of subgraphs rather than at the level of vertices. Unlike most previous graph processing frameworks, which apply the user program to the entire graph, NScale allows users to declaratively specify subgraphs of interest. Our framework includes a novel graph extraction and packing (GEP) module that utilizes a cost-based optimizer to partition and pack the subgraphs of interest into memory on as few machines as possible. The distributed execution engine then takes over and runs the user program in parallel on those subgraphs, restricting the scope of the execution appropriately, and utilizes novel techniques to minimize memory consumption by exploiting overlaps among the subgraphs. We present a comprehensive empirical evaluation comparing against three state-of-the-art systems, namely Giraph, GraphLab, and GraphX, on several real-world datasets and a variety of analysis tasks. Our experimental results show orders-of-magnitude improvements in performance and drastic reductions in the cost of analytics compared to vertex-centric approaches.
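The kind of subgraph the abstract talks about — a multi-hop neighborhood around a vertex — can be extracted with a few lines (a minimal sketch only, not NScale's GEP module; the adjacency-dict representation and function name are assumed here):

```python
def ego_subgraph(adj, center, hops=1):
    """Extract the induced subgraph on the k-hop neighborhood of `center`.

    `adj` maps each vertex to a list of its neighbors (an undirected
    graph is assumed, with each edge listed in both directions).
    """
    frontier, nodes = {center}, {center}
    for _ in range(hops):
        # expand one hop, keeping only vertices not yet collected
        frontier = {n for v in frontier for n in adj.get(v, ())} - nodes
        nodes |= frontier
    # restrict adjacency lists to the collected vertex set
    return {v: sorted(n for n in adj.get(v, ()) if n in nodes) for v in nodes}
```

On `adj = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}`, the 1-hop ego network of `"a"` contains vertices a, b, c, with the edge c–d dropped because d lies outside the neighborhood.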
107.
We study a set of problems related to efficient battery energy utilization for monitoring applications in a wireless sensor network, with the goal of increasing the sensor network's lifetime. We study several generalizations of a basic problem called Set k-Cover. The problem can be described as follows: we are given a set of sensors and a set of targets to be monitored. Each target can be monitored by a subset of the sensors. To increase the lifetime of the sensor network, we would like to partition the sensors into k sets (or time-slots) and activate each set of sensors in a different time-slot, thus extending the battery life of the sensors by a factor of k. The goal is to find a partitioning that maximizes the total coverage of the targets for a given k. This problem is known to be NP-hard. We develop an improved approximation algorithm for this problem using a reduction to Max k-Cut. Moreover, we demonstrate that this algorithm is efficient and yields almost optimal solutions in practice.
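The Set k-Cover objective can be made concrete with a simple greedy heuristic (a baseline sketch only, not the article's Max k-Cut-based approximation algorithm; the function name and input format are invented for illustration): each sensor is assigned to the time-slot where it covers the most still-uncovered targets.

```python
from typing import Dict, List, Set

def greedy_set_k_cover(sensors: Dict[str, Set[str]], k: int) -> List[Set[str]]:
    """Partition sensors into k time-slots, greedily maximizing the
    number of newly covered targets in each sensor's chosen slot."""
    slots: List[Set[str]] = [set() for _ in range(k)]    # sensors per slot
    covered: List[Set[str]] = [set() for _ in range(k)]  # targets covered per slot
    for sensor, targets in sensors.items():
        # pick the slot where this sensor adds the most uncovered targets
        best = max(range(k), key=lambda i: len(targets - covered[i]))
        slots[best].add(sensor)
        covered[best] |= targets
    return slots
```

For example, with `{"s1": {"t1", "t2"}, "s2": {"t1"}, "s3": {"t2"}}` and k = 2, the greedy pass puts s1 alone in one slot and s2, s3 together in the other, so both targets are covered in both time-slots.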
108.
There is increasing interest in solving temporal planning problems. Identification and propagation of mutual exclusion relations between actions can significantly enhance the efficiency of a planner. Current definitions of mutually exclusive actions severely restrict their concurrency. In this paper, we report on thirteen groups of permanently mutually exclusive PDDL 2.1, Level 3 actions. We report on sixteen types of potentially conflicting interactions between two actions where concurrency may be maximized by adjusting the starting time of one of the two actions. We discuss several examples where actions can overlap despite conflicting preconditions and/or effects; the processes executing these actions are mostly independent. We report on a new domain-rewriting technique called "baiting" that improves the concurrency of temporal plans. Baiting actions lure a temporal planner into improving concurrency. The technique involves splitting user-identified operators. We report on three types of baiting (standard, double, and nested) and show their suitability for various types of action interactions. Baiting requires minimal modification to the planning code, does not increase branching in search trees, and does not affect the soundness or completeness of a temporal planner. Our empirical evaluation shows that the makespans of plans generated by the efficient planner Sapa on baited domains are significantly lower than those of plans generated without baiting.
109.
Prithviraj Sen Amol Deshpande Lise Getoor 《The VLDB Journal The International Journal on Very Large Data Bases》2009,18(5):1065-1090
Due to numerous applications producing noisy data (e.g., sensor data, experimental data, data from uncurated sources, and information extraction), there has been a surge of interest in the development of probabilistic databases. Most probabilistic database models proposed to date, however, fail to meet the challenges of real-world applications on two counts: (1) they often restrict the kinds of uncertainty that the user can represent; and (2) the query processing algorithms often cannot scale to the needs of the application. In this work, we define a probabilistic database model, PrDB, that uses graphical models, a state-of-the-art probabilistic modeling technique developed within the statistics and machine learning community, to model uncertain data. We show how this results in a rich, complex, yet compact probabilistic database model that can capture commonly occurring uncertainty models (tuple uncertainty, attribute uncertainty) and more complex models (correlated tuples and attributes), and allows compact representation (shared and schema-level correlations). In addition, we show how query evaluation in PrDB translates into inference in an appropriately augmented graphical model. This allows us to easily use any of a myriad of exact and approximate inference algorithms developed within the graphical modeling community. While probabilistic inference provides a generic approach to answering queries, we show how the use of shared correlations, together with a novel inference algorithm we developed based on bisimulation, can speed up query processing significantly. We present a comprehensive experimental evaluation of the proposed techniques and show that even with a few shared correlations, significant speedups are possible.
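The reduction from query evaluation to graphical-model inference can be sketched in miniature (a toy enumeration-based marginal, standing in for the far more sophisticated machinery described above; the factor encoding and the mutual-exclusion example below are invented for illustration):

```python
from itertools import product

def marginal(factors, variables, query_var):
    """Exact inference by enumeration on a tiny discrete graphical model.

    Each factor is (scope, table): `scope` is a tuple of variable names
    and `table` maps an assignment of the scope to a non-negative weight.
    Returns P[query_var = True].
    """
    weights = {False: 0.0, True: 0.0}
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        w = 1.0
        for scope, table in factors:
            w *= table[tuple(world[v] for v in scope)]
        weights[world[query_var]] += w
    return weights[True] / (weights[False] + weights[True])

# Two correlated (mutually exclusive) tuples t1, t2: a prior on t1, and a
# conditional factor forcing t2 absent whenever t1 is present.
factors = [
    (("t1",), {(True,): 0.6, (False,): 0.4}),
    (("t1", "t2"), {(True, True): 0.0, (True, False): 1.0,
                    (False, True): 0.5, (False, False): 0.5}),
]
p_t2 = marginal(factors, ["t1", "t2"], "t2")   # P[t2 exists] = 0.4 * 0.5 = 0.2
```

A selection query returning t2 would report this marginal as the result tuple's probability; the correlation with t1 is handled by the factor rather than by any independence assumption.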
110.
A novel method of patterning surfaces with synthetic or biological polymers is demonstrated. It consists of using microcontact printing to pattern a gold surface with an adsorbate that imparts hydrophilicity; the remainder of the surface is covered with one that imparts hydrophobicity. 16-Mercaptohexadecanoic acid (MHDA) and 1H,1H,2H,2H-perfluorodecanethiol, respectively, have been used as the hydrophilic and hydrophobic adsorbates. This functionalized gold surface then serves as a template for patterning hydrophilic polymers and biomaterials, which are either spin-coated or drop-cast onto the surface. Using this methodology, it is shown by atomic force microscopy, scanning electron microscopy (SEM), and fluorescence microscopy that micron-scale patterns of a poly(ethylene)-block-poly(ethylene oxide) copolymer, poly-L-tryptophan, and bovine collagen can be fabricated, with these mimicking the MHDA patterns. For the block copolymer, it is found by atomic force microscopy that the heights of the polymer patterns decrease as their widths decrease. This is believed to be due to the inherent instability of tall, narrow polymer structures and the tendency of the polymer to minimize its exposed surface area. For poly-L-tryptophan, two different molecular weights of this polyamino acid have been studied, and different morphologies within the patterned regions are observed. While oligomeric poly-L-tryptophan (1,000-5,000 g/mol) gives smooth MHDA-covered patterns, the higher molecular weight (15,000-50,000 g/mol) yields fibrous ones.