1.
Computational complexity of queries based on itemsets   Total citations: 1 (self-citations: 0, citations by others: 1)
We investigate determining the exact bounds of the frequencies of conjunctions based on frequent sets. Our scenario is an important special case of general probabilistic logic problems that are known to be intractable. We show that, despite these restrictions, our problems remain intractable: checking whether the maximal consistent frequency of a query exceeds a given threshold is NP-complete, and evaluating the maximum entropy estimate of a query is PP-hard. We also prove that checking consistency is NP-complete.
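For a single pair of itemsets the exact bounds are easy to state via the Fréchet inequalities; the hardness results above concern the general case with many overlapping frequency constraints. A minimal illustration in Python (a sketch of the two-itemset special case, not the paper's algorithm):

```python
def conjunction_bounds(fr_a, fr_b):
    """Tight bounds on fr(A and B) given only the marginal frequencies
    fr(A) and fr(B) -- the Frechet inequalities."""
    lower = max(0.0, fr_a + fr_b - 1.0)  # forced overlap when fr(A)+fr(B) > 1
    upper = min(fr_a, fr_b)              # the conjunction cannot exceed either margin
    return lower, upper
```

For example, `conjunction_bounds(0.7, 0.6)` gives the interval [0.3, 0.6] (up to floating point); with many itemsets the constraints interact and finding the exact maximal consistent frequency becomes NP-complete, as the abstract states.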
2.
The nanometer-scale topography of self-assembling structural protein complexes in animals is believed to induce favorable cell responses. An important example of such nanostructured biological complexes is fibrillar collagen, which possesses a cross-striation structure with a periodicity of 69 nm and a peak-to-valley distance of 4–6 nm. Bovine collagen type I was assembled into fibrillar structures in vitro and sedimented onto solid supports. Their structural motif was transferred into a nickel replica by physical vapor deposition of a small-grained metal layer followed by galvanic plating. The resulting inverted nickel structure was found to faithfully reproduce most of the micrometer- and nanometer-scale topography of the biological original. This nickel replica was used as a die for the injection molding of a range of different thermoplastic polymers. Total injection molding cycle times were in the range of 30–45 seconds. One of the polymer materials investigated, polyethylene, displayed poor replication of the biological nanotopographical motif. However, the majority of the polymers showed very high replication fidelity, as evidenced by their ability to replicate the cross-striation features of less than 5 nm height difference. The latter group of materials includes poly(propylene), poly(methyl methacrylate), poly(L-lactic acid), polycaprolactone, and a copolymer of cyclic and linear olefins (COC). This work suggests that the current limiting factor for the injection molding of nanometer-scale topography in thermoplastic polymers lies with the grain size of the initial metal coating of the mold rather than with the polymers themselves.
3.
Chiral compounds can be produced efficiently by using biocatalysts. However, wild-type enzymes often do not meet the requirements of a production process, making optimization by rational design or directed evolution necessary. Here, we studied the lipase-catalyzed hydrolysis of the model substrate 1-(2-naphthyl)ethyl acetate both theoretically and experimentally. We found that a computational equivalent of alanine scanning mutagenesis based on QM/MM methodology can be applied to identify amino acid positions important for the activity of the enzyme. The theoretical results are consistent with concomitant experimental work using complete saturation mutagenesis and high-throughput screening of the target biocatalyst, a lipase from Bacillus subtilis. Both QM/MM-based calculations and molecular biology experiments identify histidine 76 as a residue that strongly affects the catalytic activity. The experiments demonstrate its important influence on enantioselectivity.
4.
Discovering the most interesting patterns is the key problem in the field of pattern mining. While ranking or selecting patterns is well studied for itemsets, it is surprisingly under-researched for other, more complex pattern types. In this paper we propose a new quality measure for episodes. An episode is essentially a set of events with possible restrictions on the order of events. We say that an episode is significant if its occurrence is abnormally compact, that is, only few gap events occur between the actual episode events, when compared to the expected length according to the independence model. We can apply this measure as a post-pruning step by first discovering frequent episodes and then ranking them according to this measure. In order to compute the score we need the mean and the variance of the length according to the independence model. As our main technical contribution we introduce a technique for computing these values. This task is surprisingly complex, and to solve it we develop intricate finite state machines that allow us to compute the needed statistics. We also show that asymptotically our score can be interpreted as a p value. In our experiments we demonstrate that despite its intricacy our ranking is fast: we can rank tens of thousands of episodes in seconds. Our experiments with text data demonstrate that our measure ranks interpretable episodes high.
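As a hedged sketch of the ranking step only (the mean and variance are computed via the paper's finite state machines, which are not reproduced here), one can standardize the observed window length against the independence-model expectation and sort by the resulting score:

```python
import math

def compactness_score(observed_len, expected_len, variance):
    """Standardized compactness: positive when an episode's occurrences
    are shorter (more compact) than the independence model expects."""
    return (expected_len - observed_len) / math.sqrt(variance)

def rank_episodes(episodes):
    """episodes: list of (name, observed_len, expected_len, variance);
    returns the list ordered from most to least significant."""
    return sorted(episodes,
                  key=lambda e: compactness_score(*e[1:]),
                  reverse=True)
```

Here `observed_len`, `expected_len`, and `variance` are assumed inputs standing in for the statistics the paper derives; asymptotic normality is what lets such a score be read as a p value.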
5.
Finding dense subgraphs is an important problem in graph mining and has many practical applications. At the same time, while large real-world networks are known to have many communities that are not well separated, the majority of the existing work focuses on the problem of finding a single densest subgraph. Hence, it is natural to consider the question of finding the top-k densest subgraphs. One major challenge in addressing this question is how to handle overlaps: eliminating overlaps completely is one option, but this may lead to extracting subgraphs that are not as dense as would be possible by allowing a limited amount of overlap. Furthermore, overlaps are desirable, as in most real-world graphs there are vertices that belong to more than one community, and thus, to more than one densest subgraph. In this paper we study the problem of finding top-k overlapping densest subgraphs, and we present a new approach that improves over the existing techniques, both in theory and practice. First, we reformulate the problem definition in a way that allows us to obtain an algorithm with a constant-factor approximation guarantee. Our approach relies on techniques for solving the max-sum diversification problem, which, however, we need to extend in order to make them applicable to our setting. Second, we evaluate our algorithm on a collection of benchmark datasets and show that it convincingly outperforms the previous methods, both in terms of quality and efficiency.
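The paper's constant-factor algorithm builds on max-sum diversification and is not reproduced here; as a baseline sketch of the single-densest-subgraph building block, the classic greedy peeling heuristic (repeatedly remove a minimum-degree vertex, keep the densest intermediate subgraph under the average-degree density |E|/|V|) can be written as:

```python
from collections import defaultdict

def densest_subgraph(edges):
    """Greedy peeling for a single densest subgraph (density = |E|/|V|).
    A well-known 1/2-approximation; a sketch, not the paper's algorithm.
    `edges` is a list of undirected (u, v) pairs of a simple graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_nodes = 0.0, set()
    while nodes:
        density = m / len(nodes)
        if density >= best_density:       # prefer the smaller witness on ties
            best_density, best_nodes = density, set(nodes)
        v = min(nodes, key=lambda x: len(adj[x]))  # peel a min-degree vertex
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        nodes.discard(v)
        del adj[v]
    return best_nodes, best_density
```

On a triangle with a pendant vertex, peeling discards the pendant and returns the triangle; extending this to top-k overlapping subgraphs is exactly where the paper's reformulation and diversification machinery come in.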
6.
Using background knowledge to rank itemsets   Total citations: 1 (self-citations: 1, citations by others: 0)
Assessing the quality of discovered results is an important open problem in data mining. Such assessment is particularly vital when mining itemsets, since commonly many of the discovered patterns can be easily explained by background knowledge. The simplest approach to screening uninteresting patterns is to compare the observed frequency against the independence model. Since the parameters for the independence model are the column margins, we can view such screening as a way of using the column margins as background knowledge. In this paper we study more flexible approaches for infusing background knowledge. Namely, we show that we can efficiently use additional knowledge such as row margins, lazarus counts, and bounds of ones. We demonstrate that these statistics describe forms of data that occur in practice and have been studied in data mining. To infuse the information efficiently we use a maximum entropy approach. In its general setting, solving a maximum entropy model is infeasible, but we demonstrate that for our setting it can be solved in polynomial time. Experiments show that more sophisticated models fit the data better and that using more information improves the frequency prediction of itemsets.
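As a baseline only (the paper's contribution is the richer maximum entropy model with row margins, lazarus counts, and bounds of ones), the independence-model prediction from column margins is simply a product:

```python
import math

def independence_frequency(itemset, margins):
    """Predicted frequency of an itemset under the independence model:
    the product of the column margins (frequencies of 1s) of its items.
    `margins` maps each item to its column margin in [0, 1]."""
    return math.prod(margins[item] for item in itemset)
```

For instance, with margins 0.5 for item `a` and 0.4 for item `b`, the predicted frequency of `{a, b}` is 0.2; an observed frequency far from this baseline is what flags a pattern as potentially interesting.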
7.
Recent advances in data-acquisition technologies have equipped team coaches and sports analysts with the capability of collecting and analyzing detailed data of team activity in the field. It is now possible to monitor a sports event and record information regarding the position of the players in the field, passing the ball, coordinated moves, and so on. In this paper we propose a new method to analyze such team activity data. Our goal is to segment the overall activity stream into a sequence of potentially recurrent modes, which reflect different strategies adopted by a team, and thus, help to analyze and understand team tactics. We model team activity data as a temporal network, that is, a sequence of time-stamped edges that capture interactions between players. We then formulate the problem of identifying a small number of team modes and segmenting the overall timespan so that each segment can be mapped to one of the team modes; hence the set of modes summarizes the overall team activity. We prove that the resulting optimization problem is \(\mathrm {NP}\)-hard, and we discuss its properties. We then present a number of different algorithms for solving the problem, including an approximation algorithm that is practical only for one mode, as well as heuristic methods based on iterative and greedy approaches. We benchmark the performance of our algorithms on real and synthetic datasets. Of all methods, the iterative algorithm provides the best combination of performance and running time. We demonstrate practical examples of the insights provided by our algorithms when mining real sports-activity data. In addition, we show the applicability of our algorithms on other types of data, such as social networks.
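As a much-simplified sketch of the iterative idea (a fixed, given segmentation; Lloyd-style alternation; each segment reduced to a set of player-interaction edges; this is an illustration, not the paper's algorithm), one can alternate between summarizing each mode and reassigning segments to modes:

```python
def fit_modes(segments, assignment, k):
    """Summarize each mode as the set of edges occurring in a majority
    of the segments currently assigned to it (the symmetric-difference
    median of those edge sets)."""
    modes = []
    for m in range(k):
        members = [segments[i] for i in range(len(segments))
                   if assignment[i] == m]
        counts = {}
        for seg in members:
            for e in seg:
                counts[e] = counts.get(e, 0) + 1
        modes.append({e for e, c in counts.items() if 2 * c > len(members)})
    return modes

def assign_modes(segments, modes):
    """Reassign each segment to the mode with the smallest
    symmetric-difference cost."""
    return [min(range(len(modes)), key=lambda m: len(seg ^ modes[m]))
            for seg in segments]
```

Iterating the two steps to a fixed point mirrors the paper's iterative heuristic in spirit; the actual problem also optimizes the segmentation boundaries, which is what makes it NP-hard.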
8.
We review a number of formal verification techniques supported by STeP, the Stanford Temporal Prover, describing how the tool can be used to verify properties of several versions of the Bakery algorithm for mutual exclusion. We verify the classic two-process algorithm and simple variants, as well as an atomic parameterized version. The methods used include deductive verification rules, verification diagrams, automatic invariant generation, and finite-state model checking and abstraction.
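The paper verifies the algorithm deductively with STeP rather than by executing it; as an executable sketch, the classic two-process Bakery algorithm (Lamport) can be written in Python, with the lexicographic comparison of (ticket, id) pairs breaking ties between equal tickets:

```python
import threading
import time

# Two-process Bakery lock: each thread takes a numbered "ticket"; the
# lower ticket (ties broken by thread id) enters the critical section.
number = [0, 0]            # ticket numbers; 0 means "not competing"
choosing = [False, False]  # True while a thread is picking its ticket
counter = 0                # shared state protected by the lock

def worker(i, iters):
    global counter
    j = 1 - i              # id of the other thread
    for _ in range(iters):
        # doorway: take a ticket larger than any currently in use
        choosing[i] = True
        number[i] = max(number) + 1
        choosing[i] = False
        # wait until the other thread has finished choosing...
        while choosing[j]:
            time.sleep(0)  # yield the GIL while spinning
        # ...and until our (ticket, id) pair has priority
        while number[j] != 0 and (number[j], j) < (number[i], i):
            time.sleep(0)
        counter += 1       # critical section: non-atomic read-modify-write
        number[i] = 0      # exit protocol: release the ticket

threads = [threading.Thread(target=worker, args=(i, 500)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If mutual exclusion holds, no increment of `counter` is lost and it ends at exactly 1000; this relies on CPython's effectively sequentially consistent memory behavior, which is precisely the kind of assumption the deductive proofs in STeP make explicit.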
9.
Green sand moulding machines for cast iron foundries are presently unable to uniquely identify individual castings. An insert tool concept is developed and tested via incremental mock-up development. The tool is part of the pattern plate and changes shape between each moulding, thus giving each mould a unique ID by embossing a Data Matrix symbol into the sand. In the process of producing the mould, each casting can be given a unique direct part mark (DPM), enabling part tracking throughout the casting's life cycle. Sand embossing is achieved with paraffin-actuated reconfigurable pin-type tooling under simulated processing conditions. The marker geometry limitations have been tested using static symbol patterns, both for sand embossing and actual casting marking. The marked castings have been successfully identified with decoding software. The study shows that each element of this technology can be successfully applied within the foundry industry.
10.
The focus of this article is US military research in Greenland and its role in Danish-American political relations in the early Cold War period, 1945–1968. This was a period of intense US research activity that aimed to overcome the hostile Greenlandic environment and harness it for military purposes. In the US-Danish defense agreement on Greenland of 1951, the USA got a free hand to develop three so-called defense areas for military purposes, while it had to seek Danish permission for research and other activities outside these areas. The two partners had differing, but mainly compatible, interests in this process. The US interest was freedom to do research on the gigantic Greenland Icecap, while the Danish authorities emphasized the protection of Danish sovereignty over Greenland. The article follows the US research programs in the 1950s and 1960s and the Danish responses in some detail, including the intriguing and still mysterious Camp Century project and its relationship with the US Army's Iceworm plan to deploy strategic missiles beneath the surface of the Greenland Icecap.