101.
An attempt is made to establish the most suitable initial structure for obtaining highly drawn (at room temperature) nylons and polyesters. It is found that neither a completely crystalline structure nor a fully amorphous one, but rather a low-crystallinity, very imperfect structure (with a large number of small crystallites), is the most suitable. Through ultraquenching of nylon 6 and PBT melts, followed by storage at ambient conditions, draw ratios of 7 and 8, respectively, are achieved. Subsequent annealing roughly doubles the elasticity modulus and tensile stress. The resulting structure is characterized by WAXS, SAXS, IR spectroscopy, and DSC measurements. The highly drawn and annealed samples show a strongly reduced concentration of chain folds and an absence of long spacing.
102.
Matrix-Matrix Multiplication (MMM) is a highly important kernel in linear algebra algorithms, and the performance of its implementations depends on memory utilization and data locality. Several MMM algorithms exist, such as the standard algorithm and the Strassen-Winograd variant, as well as recursive array layouts such as Z-Morton or U-Morton; however, their data locality is lower than that of the proposed methodology. Moreover, several state-of-the-art self-tuning libraries exist, such as ATLAS for the MMM algorithm, which tests many MMM implementations. Installing ATLAS requires, on the one hand, an extremely complex empirical tuning step and, on the other, a large number of compiler options, both of which are outside the scope of this paper. In this paper, a new methodology using the standard MMM algorithm is presented that achieves improved performance by focusing on data locality (both temporal and spatial). This methodology finds the schedule that conforms to optimum memory management. Compared with (Chatterjee et al. in IEEE Trans. Parallel Distrib. Syst. 13:1105, 2002; Li and Garzaran in Proc. of Lang. Compil. Parallel Comput., 2005; Bilmes et al. in Proc. of the 11th ACM Int. Conf. Supercomput., 1997; Aberdeen and Baxter in Concurr. Comput. Pract. Exp. 13:103, 2001), the proposed methodology has two major advantages. First, the schedule used at the tile level differs from the one at the element level and has better data locality, suited to the sizes of the memory hierarchy. Second, its exploration time is short, because it searches only over the number of tiling levels used and over the interval (1, 2) (Sect. 4) for the best tile size at each cache level. A software tool (C code) implementing the above methodology was developed, taking the hardware model and the matrix sizes as input. The methodology achieves better performance than others across a wide range of architectures: compared with the best existing related work, which we implemented, performance improvements of up to 55% over the standard MMM algorithm and up to 35% over Strassen's are observed, both under recursive data array layouts.
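The cache-blocking idea at the heart of such schedules can be sketched as follows (a generic one-level tiling example, not the paper's actual scheduler or C tool; the tile size of 32 is illustrative, and NumPy slice clipping handles matrix sizes that are not multiples of the tile):

```python
import numpy as np

def tiled_mmm(A, B, tile=32):
    """Standard MMM with one level of square tiling.

    Iterating over tile-sized blocks keeps the working set of each
    block product inside a single cache level, improving temporal
    and spatial locality over the naive triple loop.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # Slices clip at the matrix boundary, so ragged edges work.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((96, 96))
B = rng.standard_normal((96, 96))
C = tiled_mmm(A, B)
```

Choosing the tile size per cache level is exactly the small search space the methodology exploits.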
103.
The digitalization process and its outcomes in the 21st century are accelerating transformation and the creation of sustainable societies. Our decisions, actions, and even our existence in the digital world generate data, which offer tremendous opportunities for revising current business methods and practices; there is thus a critical need for novel theories embracing big data analytics ecosystems. Building on the rapidly developing research on digital technologies and the strengths the information systems discipline brings to the area, we conceptualize big data and business analytics ecosystems and propose a model, the Digital Transformation and Sustainability (DTS) model, that portrays how such ecosystems can pave the way towards digital transformation and sustainable societies. This editorial argues that, in order to reach digital transformation and create sustainable societies, first, no actor in society can be seen in isolation; instead, we need to improve our understanding of the interactions and interrelations that lead to knowledge, innovation, and value creation. Second, we need deeper insight into which capabilities must be developed to harness the potential of big data analytics. Our suggestions in this paper, coupled with the five research contributions included in the special issue, seek to offer a broader foundation for paving the way towards digital transformation and sustainable societies.
104.
For any given system, the number and location of sensors affect the closed-loop performance as well as the reliability of the system. Hence, one problem in control system design is selecting the sensors in some optimum sense that considers both system performance and reliability. Although some methods have been proposed that deal with some of these aspects, this work presents a design framework dealing with both control and reliability aspects. The proposed framework identifies the best sensor set for which optimum performance is achieved even under single or multiple sensor failures, with minimum sensor redundancy. The proposed systematic framework combines linear quadratic Gaussian control, fault-tolerant control, and multiobjective optimisation. Its efficacy is shown via appropriate simulations on an electromagnetic suspension system.
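The combinatorial core of sensor-set selection can be illustrated with a toy brute-force search (all matrices are invented, and the score here is a finite-horizon observability Gramian rather than the paper's LQG/fault-tolerance multiobjective cost):

```python
import itertools
import numpy as np

def obs_gramian(A, C, horizon=20):
    # Finite-horizon discrete-time observability Gramian sum_k (A^k)^T C^T C A^k.
    G = np.zeros((A.shape[0], A.shape[0]))
    Ak = np.eye(A.shape[0])
    for _ in range(horizon):
        G += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return G

def best_sensor_set(A, rows, size):
    """Exhaustively pick the sensor subset whose observability Gramian
    has the largest smallest eigenvalue (a worst-case estimability
    score); a crude stand-in for a performance/reliability cost."""
    best, best_val = None, -np.inf
    for subset in itertools.combinations(range(len(rows)), size):
        C = np.vstack([rows[i] for i in subset])
        val = np.linalg.eigvalsh(obs_gramian(A, C)).min()
        if val > best_val:
            best, best_val = subset, val
    return best, best_val

# Hypothetical 2-state plant with three candidate sensors.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
rows = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])]
subset, val = best_sensor_set(A, rows, 1)
```

Measuring only the second state leaves the pair unobservable (zero score), so the search avoids it; a reliability-aware variant would additionally score each set under its failure sub-sets.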
105.
Foreign body reactions commonly refer to the network of immune and inflammatory reactions of humans or animals to foreign objects placed in their tissues. They are basic biological processes and are also highly relevant to bioengineering applications of implants, as fibrotic tissue forming around medical implants has been found to substantially reduce the effectiveness of the devices. Despite intensive research on the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study foreign body reactions. This study focuses on a kinetics-based predictive tool for analyzing the outcomes of multiple interacting complex reactions among various cells, proteins, and biochemical processes, and for understanding transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as the spatial variation of foreign body reaction kinetics. The simulation results are consistent with experimental data, and the model can facilitate quantitative insights into the foreign body reaction process in general.
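The flavor of such a two-dimensional kinetics model can be sketched with a minimal reaction-diffusion scheme (a generic toy with made-up parameters, not the authors' model): one chemical species diffuses, decays, and is released from a source edge standing in for the implant surface.

```python
import numpy as np

def simulate(n=32, steps=200, D=0.1, decay=0.05, dt=0.1):
    """Explicit finite differences for dc/dt = D*Laplacian(c) - decay*c + s,
    with a constant source s along the left edge representing an implant
    surface releasing an inflammatory signal. Periodic boundaries (via
    np.roll) keep the sketch short; dt satisfies the stability bound
    4*D*dt < 0.5 for this grid spacing."""
    c = np.zeros((n, n))
    s = np.zeros((n, n))
    s[:, 0] = 1.0
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0)
               + np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
        c += dt * (D * lap - decay * c + s)
    return c

c = simulate()
```

A full model of this kind couples several such fields (cells, proteins) through nonlinear kinetic terms, but the time-stepping skeleton is the same.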
106.
We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M′ such that more people prefer M′ to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied by Abraham et al. (SIAM J. Comput. 37(4):1030–1045, 2007). If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity: the unpopularity factor, denoted u(M), and the unpopularity margin, denoted g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then u(M) ≥ 2 for all matchings M in G. Here we show that a matching M that achieves u(M) = 2 can be computed in \(O(m\sqrt{n})\) time (where m is the number of edges in G and n is the number of nodes), provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs \(H = H_{2}, H_{3}, \ldots, H_{k}\) such that if \(H_{k}\) admits a matching that matches all people, then we can compute in \(O(km\sqrt{n})\) time a matching M such that \(u(M)\le k-1\) and \(g(M)\le n(1-\frac{2}{k})\). Simulation results suggest that our algorithm finds a matching with low unpopularity in random instances.
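The popularity comparison underlying these definitions follows directly from the abstract and can be coded in a few lines; the instance below (three applicants with identical preferences over two jobs) is the standard example admitting no popular matching, since the matchings beat one another cyclically.

```python
def votes(prefs, M1, M2):
    """Count applicants who strictly prefer their partner in M1 to M2.
    prefs[a] is a's preference list (most preferred first); being
    unmatched is ranked below every listed job."""
    count = 0
    for a, plist in prefs.items():
        r1 = plist.index(M1[a]) if M1.get(a) in plist else len(plist)
        r2 = plist.index(M2[a]) if M2.get(a) in plist else len(plist)
        if r1 < r2:
            count += 1
    return count

def more_popular(prefs, M1, M2):
    # M1 defeats M2 if strictly more applicants prefer M1 to M2.
    return votes(prefs, M1, M2) > votes(prefs, M2, M1)

# Three applicants, two jobs, identical preferences: no popular matching.
prefs = {'a1': ['j1', 'j2'], 'a2': ['j1', 'j2'], 'a3': ['j1', 'j2']}
M1 = {'a1': 'j1', 'a2': 'j2'}   # a3 unmatched
M2 = {'a2': 'j1', 'a3': 'j2'}   # a1 unmatched
```

Here M2 defeats M1 two votes to one; rotating the roles shows every matching is defeated by another, which is exactly the situation where bounded-unpopularity matchings become the goal.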
107.
Robust control of a class of uncertain systems with disturbances and uncertainties that do not satisfy the "matching" condition is investigated in this paper via a disturbance observer based control (DOBC) approach. In the context of this paper, "matched" disturbances/uncertainties are those entering the system through the same channels as the control inputs. By properly designing a disturbance compensation gain, a novel composite controller is proposed that counteracts the "mismatched" lumped disturbances in the output channels. The proposed method significantly extends the applicability of DOBC methods. Rigorous stability analysis of the closed-loop system under the proposed method is established under mild assumptions. The method is applied to a nonlinear MAGnetic LEViation (MAGLEV) suspension system. Simulations show that, compared to the widely used integral control method, the proposed method provides significantly improved disturbance rejection and robustness against load variation.
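The mismatched-disturbance idea can be sketched on a hypothetical scalar double integrator (illustrative gains, not the MAGLEV system or the paper's actual design): the disturbance d enters the x1 channel while the control enters the x2 channel, so ordinary state feedback leaves a steady-state offset; a disturbance observer estimate fed back through a suitably chosen compensation gain removes it.

```python
def simulate(use_dobc, d=1.0, T=20.0, dt=0.001, k1=2.0, k2=3.0, L=5.0):
    """Plant: x1' = x2 + d (mismatched disturbance), x2' = u.
    Observer: d_hat = z + L*x1 with z' = -L*(x2 + d_hat), so that
    d_hat' = L*(d - d_hat) and d_hat -> d for constant d.
    Compensation gain -k2 on d_hat drives the output x1 back to zero."""
    x1 = x2 = z = 0.0
    for _ in range(int(round(T / dt))):
        d_hat = z + L * x1
        u = -k1 * x1 - k2 * x2
        if use_dobc:
            u -= k2 * d_hat          # disturbance compensation term
        z += dt * (-L * (x2 + d_hat))
        x2 += dt * u
        x1 += dt * (x2 + d)
    return x1

offset_plain = simulate(False)   # settles at k2*d/k1 = 1.5
offset_dobc = simulate(True)     # settles at 0
```

The point of the example: because the disturbance is mismatched, simply subtracting d_hat from u would not help; the compensation gain must be shaped so the offset cancels in the output channel.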
108.
The traditional approach to computational problem solving is to use one of the available algorithms to obtain solutions for all given instances of a problem. However, typically not all instances are alike, nor does a single algorithm perform best on all instances. Our work investigates a more sophisticated approach to problem solving, called Recursive Algorithm Selection, whereby several algorithms for a problem (including some recursive ones) are available to an agent that makes an informed decision about which algorithm to select for each sub-instance at each recursive call made while solving an instance. Reinforcement learning methods are used to learn decision policies that optimize any given performance criterion (time, memory, or a combination thereof) from actual execution and profiling experience. This paper focuses on the well-known problem of state-space heuristic search and combines the A* and RBFS algorithms into a hybrid search algorithm whose decision policy is learned using the Least-Squares Policy Iteration (LSPI) algorithm. Our benchmark problem domain involves shortest-path problems on a real-world dataset encoding the entire street network of the District of Columbia (DC), USA. The derived hybrid algorithm outperforms the individual algorithms in the majority of cases according to a variety of performance criteria balancing time and memory. The proposed methodology is generic, can be applied to a variety of other problems, and requires no prior knowledge about the individual algorithms used or the properties of the underlying problem instances being solved.
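For reference, the A* component of the hybrid is the textbook best-first search guided by f = g + h; a minimal sketch on a toy 5x5 grid (not the paper's hybrid algorithm or its learned LSPI policy):

```python
import heapq

def astar(start, goal, neighbors, h):
    """Textbook A*: expand nodes in order of f = g + h; returns the
    cost of the cheapest start-goal path, or None if unreachable."""
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float('inf')):
            continue                      # stale heap entry
        for nxt, w in neighbors(node):
            ng = g + w
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

def grid_neighbors(p):
    # 4-connected moves on a 5x5 grid, unit edge cost.
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx_, ny_ = x + dx, y + dy
        if 0 <= nx_ < 5 and 0 <= ny_ < 5:
            yield (nx_, ny_), 1

manhattan = lambda p: abs(4 - p[0]) + abs(4 - p[1])
cost = astar((0, 0), (4, 4), grid_neighbors, manhattan)
```

RBFS explores the same search space with linear memory at the price of node re-expansions, which is exactly the time/memory trade-off the learned selection policy arbitrates at each recursive call.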
109.
Many cryptographic primitives used in cryptographic schemes and security protocols such as SET, PKI, IPSec, and VPNs rely on hash functions, which form a special family of cryptographic algorithms. Applications that use these security schemes are becoming increasingly popular, and some of them call for higher throughput, either due to their rapid acceptance by the market or due to their nature. In this work, a new methodology is presented for achieving high operating frequency and throughput in implementations of all widely used hash functions, and those expected to be used in the near future, such as MD-5, SHA-1, RIPEMD (all versions), SHA-256, SHA-384, and SHA-512. In the proposed methodology, five different techniques have been developed and combined in the most effective way to achieve maximum performance. Compared to conventional pipelined implementations of hash functions in FPGAs, the proposed methodology can yield up to a 160 percent increase in throughput.
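Why pipelining dominates hash-core throughput can be seen from a back-of-envelope model (the SHA-1-style figures below, a 512-bit block, 80 rounds, and 100 MHz, are illustrative assumptions, not the paper's results or its five techniques):

```python
def throughput_gbps(block_bits, f_mhz, cycles_per_block):
    # One message block completes every `cycles_per_block` clock cycles.
    return block_bits * f_mhz / cycles_per_block / 1000.0

# Iterative core: one round per cycle, so 80 cycles per 512-bit block.
iterative = throughput_gbps(512, 100, 80)   # 0.64 Gbps

# Fully unrolled and pipelined core: one stage per round, so a new
# block finishes every cycle once the pipeline is full.
pipelined = throughput_gbps(512, 100, 1)    # 51.2 Gbps
```

The pipelined core trades area for an 80x throughput gain at the same clock; in practice the achievable clock frequency of each stage also changes, which is where round-level optimization techniques come in.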
110.
This work introduces a metaheuristic method for reconstructing a DNA string from its l-mer content in the presence of large numbers of positive and negative errors. The procedure consists of three parts: the formulation of the problem as an asymmetric traveling salesman problem (ATSP), a technique for handling the positive errors, and an optimization algorithm that solves the formulated problem. The optimization algorithm is a variant of the threshold accepting method with intense local search, and its operation is controlled by a size-diminishing shell. The optimization algorithm is applied consecutively to ATSPs of continuously decreasing size until it reaches a final solution. The proposed method provides solutions of better quality than algorithms in the recent literature.
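The threshold accepting principle itself is simple: accept any neighbor that is not worse than the current solution by more than a shrinking threshold. A generic sketch with 2-opt moves on a small symmetric TSP follows (the paper's method works on ATSPs with intense local search and a size-diminishing shell, none of which is reproduced here; the instance and thresholds are invented):

```python
import math
import random

def tour_len(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def threshold_accepting(dist, thresholds=(1.0, 0.5, 0.2, 0.1, 0.0), iters=2000, seed=0):
    """Accept a 2-opt neighbour whenever it worsens the tour by less
    than the current threshold; as the threshold shrinks to 0, only
    improving moves pass and the search becomes a local search."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = tour_len(tour, dist)
    for th in thresholds:
        for _ in range(iters):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_len(cand, dist) - tour_len(tour, dist) < th:
                tour = cand
        best = min(best, tour_len(tour, dist))
    return tour, best

# Toy instance: 8 cities on a unit circle (optimal tour = the circle order).
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
random.Random(1).shuffle(pts)
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, best = threshold_accepting(dist)
```

Nonzero early thresholds let the search escape poor local optima before the final zero-threshold phase polishes the tour.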