51.
Blockchain has recently emerged as a research trend, with potential applications across a broad range of industries and contexts. One particularly successful blockchain technology is the smart contract, which is widely used in commercial settings (e.g., high-value financial transactions). This, however, has security implications, given the potential to benefit financially from a security incident (e.g., identification and exploitation of a vulnerability in the smart contract or its implementation). Among blockchain platforms, Ethereum is the most active and prominent. Hence, in this paper, we systematically review existing research efforts on Ethereum smart contract security published between 2015 and 2019. Specifically, we focus on how smart contracts can be maliciously exploited and targeted, including security issues in the contract programming model, vulnerabilities in the programs themselves, and safety considerations introduced by the program execution environment.  We also identify potential research opportunities and a future research agenda.
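One of the best-known vulnerability classes such surveys cover is reentrancy, where a contract makes an external call before updating its own state. Below is a minimal sketch of that pattern, simulated in Python rather than Solidity; all names (`VulnerableBank`, `withdraw`, the triple-drain attacker) are illustrative, not taken from the paper.

```python
# Toy simulation of the reentrancy pattern: the external call happens
# BEFORE the balance is zeroed, so the callee can re-enter withdraw().

class VulnerableBank:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amt):
        self.balances[who] = self.balances.get(who, 0) + amt

    def withdraw(self, who, callback):
        amt = self.balances.get(who, 0)
        if amt > 0:
            callback(amt)             # external call first...
            self.balances[who] = 0    # ...state update only afterwards

bank = VulnerableBank()
bank.deposit("attacker", 100)

stolen = []
def attack(amt):
    stolen.append(amt)
    if len(stolen) < 3:               # re-enter while the balance is nonzero
        bank.withdraw("attacker", attack)

bank.withdraw("attacker", attack)
print(sum(stolen))                    # 300 drained from a 100 deposit
```

Reordering the two lines in `withdraw` (update state, then call out) closes the hole, which is the essence of the checks-effects-interactions guidance for Solidity contracts.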
52.
53.
54.
Greedy scheduling heuristics provide a low-complexity, scalable, albeit notably sub-optimal, strategy for hardware-based crossbar schedulers. In contrast, the maximum matching algorithm for bipartite graphs provides optimal scheduling for crossbar-based interconnection networks, at a significant cost in complexity and scalability. In this paper, we show how maximum matching can be reformulated in terms of Boolean operations rather than the more traditional matrix computations and, by leveraging the inherent parallelism available in custom hardware design, introduce three maximum matching implementations in hardware. Specifically, we examine a Pure Logic Scheduler with three dimensions of parallelism, a Matrix Scheduler with two dimensions of parallelism, and a Vector Scheduler with one dimension of parallelism. These designs reduce the algorithmic complexity for an N×N network from O(N³) to O(1), O(K), and O(KN), respectively, where K is the number of optimization steps. While an optimal scheduling algorithm requires K = 2N−1 steps, by starting with our hardware-based greedy strategy to generate an initial schedule, our simulation results show that the maximum matching scheduler can achieve 99% of the optimal schedule when K = 9. We examine the hardware and time complexity of these architectures for crossbar sizes of up to N = 1024. Using FPGA synthesis results, we show that a greedy schedule for crossbars ranging from 8×8 to 256×256 can be optimized in less than 20 ns per optimization step. For crossbars reaching 1024×1024, scheduling can be completed in approximately 10 μs with current technology and could reach under 90 ns with future technologies.
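The greedy-then-refine flow described above can be sketched in software (this is an algorithmic illustration only, not the paper's Boolean hardware formulation; function names and the request matrix are made up): a greedy pass grants each input the first free output, then a bounded number of augmenting-path steps improve the matching toward the maximum.

```python
# Greedy crossbar matching followed by up to max_steps augmenting-path
# refinements, mirroring the "greedy initial schedule + K optimization
# steps" idea. req[i][j] == 1 means input i requests output j.

def greedy_match(req):
    """Greedy maximal matching: scan inputs in order, grab first free output."""
    n = len(req)
    out_used = [False] * n
    match = [-1] * n                  # match[i] = output granted to input i
    for i in range(n):
        for j in range(n):
            if req[i][j] and not out_used[j]:
                match[i] = j
                out_used[j] = True
                break
    return match

def augment(req, match, max_steps):
    """Improve the matching with augmenting-path searches for unmatched inputs."""
    n = len(req)
    owner = [-1] * n                  # owner[j] = input currently holding output j
    for i, j in enumerate(match):
        if j >= 0:
            owner[j] = i

    def try_input(i, seen):
        for j in range(n):
            if req[i][j] and j not in seen:
                seen.add(j)
                if owner[j] == -1 or try_input(owner[j], seen):
                    match[i] = j
                    owner[j] = i
                    return True
        return False

    steps = 0
    for i in range(n):
        if match[i] == -1 and steps < max_steps:
            steps += 1
            try_input(i, set())
    return match

req = [[1, 1, 0],
       [1, 0, 0],
       [0, 1, 1]]
m = greedy_match(req)                 # greedy leaves input 1 unmatched here
m = augment(req, m, max_steps=5)
print(m)                              # [1, 0, 2]: all three inputs matched
```

In hardware, the paper replaces these sequential searches with parallel Boolean operations, which is what collapses the O(N³) software cost to O(1), O(K), or O(KN) depending on how much parallelism is spent.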
55.
We propose a finite structural translation of possibly recursive π-calculus terms into Petri nets. This is achieved by using high-level nets together with an equivalence on markings in order to model entering into recursive calls, which do not need to be guarded. We view a computing system as consisting of a main program (π-calculus term) together with procedure declarations (recursive definitions of π-calculus identifiers). The control structure of these components is represented using disjoint high-level Petri nets, one for the main program and one for each of the procedure declarations. The program is executed once, while each procedure can be invoked several times (even concurrently), each such invocation being uniquely identified by structured tokens which correspond to the sequence of recursive calls along the execution path leading to that invocation.
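The structured-token idea can be illustrated with a small sketch (the class and call-site names are invented for illustration and are not the paper's high-level net semantics): each token records the sequence of calls leading to its invocation, so separate invocations of the same procedure remain distinguishable in the marking.

```python
# Each procedure declaration gets its own net; its marking is a multiset
# of structured tokens, each token being the call path that produced it.
from collections import Counter

class ProcedureNet:
    def __init__(self, name):
        self.name = name
        self.entry = Counter()          # marking of the procedure's entry place

    def invoke(self, caller_token, call_site):
        # The new token extends the caller's call path by this call site.
        token = caller_token + (call_site,)
        self.entry[token] += 1
        return token

fact = ProcedureNet("Fact")
t0 = ()                                 # the main program's token: empty path
t1 = fact.invoke(t0, "main:1")          # main invokes Fact
t2 = fact.invoke(t1, "fact:rec")        # Fact invokes itself recursively
print(t2)                               # ('main:1', 'fact:rec')
```

Because tokens carry their whole call path, concurrent invocations reached along different paths never collide, which is what lets unguarded recursion be modeled with a finite net structure.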
56.
In the 1990s, enrollments grew rapidly in information systems (IS) and computer science. Then, beginning in 2000 and 2001, enrollments declined precipitously. This paper looks at the enrollment bubble and the dotcom bubble that drove IT enrollments. Although the enrollment bubble occurred worldwide, this paper focuses primarily on U.S. data, which are widely available, and secondarily on Western European data. The paper notes that the dotcom bubble was an investment disaster, but that U.S. IT employment fell surprisingly little and soon surpassed the bubble's peak. In addition, U.S. IT unemployment rose to almost the level of total unemployment in 2003, then fell back to traditionally low levels by 2005. Job prospects in the U.S. and most other countries are good for the short term, and U.S. Bureau of Labor Statistics employment projections for 2006–2016 indicate that prospects in the U.S. will continue to be good for most IT jobs. However, offshoring is a persistent concern for students in Western Europe and the United States. The data on offshoring are of poor quality, but several studies indicate that IT job losses from offshoring are small and may be counterbalanced by gains in IT inshoring jobs. At the same time, offshoring and productivity gains appear to be making low-level jobs, such as programming and user support, less attractive. This means that IS and computer science programs will have to focus on producing higher-level job skills among graduates, and students may have to stop treating the undergraduate degree as a terminal degree in IS and computer science.
57.
The majority of existing escrowable identity-based key agreement protocols provide only partial forward secrecy. Such protocols are, arguably, not suitable for many real-world applications, which tend to require a stronger notion of forward secrecy: perfect forward secrecy. In this paper, we propose an efficient perfect forward-secure identity-based key agreement protocol in the escrow mode. We prove the security of our protocol in the random oracle model, assuming the intractability of the Gap Biline...
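The property at stake can be illustrated with a toy sketch (a textbook Diffie-Hellman exchange over RFC 3526 group 14, not the paper's identity-based, escrowable protocol): perfect forward secrecy comes from each session using fresh ephemeral secrets that are discarded afterwards, so a later compromise of long-term keys cannot recover past session keys.

```python
# Ephemeral Diffie-Hellman: fresh secrets per session give forward secrecy.
import secrets

# 2048-bit MODP prime from RFC 3526 (group 14), generator 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
G = 2

def ephemeral_exchange():
    # Each side draws a fresh ephemeral secret per session and would erase it
    # after deriving the key; no long-term secret can reproduce it later.
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    A, B = pow(G, a, P), pow(G, b, P)   # public values, exchanged in the clear
    return pow(B, a, P), pow(A, b, P)   # both sides derive the same key

ka, kb = ephemeral_exchange()
assert ka == kb                          # agreement within one session
```

Identity-based escrowable protocols like the paper's add long-term identity keys and a key-generation center on top of this; the forward-secrecy argument is the same — session keys must not be derivable from any long-term material alone.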
58.
Nucleating agents increase the impact strength, tensile strength, and tensile elastic modulus of semicrystalline polymers. They also decrease product cycle times, resulting in a cost saving per product unit. We have synthesized and tested 15 compounds as nucleators for polypropylene. Of these, trinaphthylidene sorbitol, tri-(4-methyl-1-naphthylidene)sorbitol, tri-(4-methoxy-1-naphthylidene)sorbitol, and dibenzylidene xylitol are efficient nucleators of polypropylene. Trinaphthylidene sorbitol (tns) has two major diastereomers: the "S" diastereomer yields a faster crystallization rate for polypropylene than does the commercial nucleator dibenzylidene sorbitol (Millad 3905). Crystallization rates are 208 and 88, respectively (t min⁻¹ × 1000). The "R" diastereomer, however, is a poor nucleator and interferes with the nucleating activity of the "S" diastereomer. A 52/48 mixture of diastereomers does not nucleate polypropylene, even at twice the concentration. This is the first time that the importance of stereochemistry has been demonstrated in nucleating action. © 1994 John Wiley & Sons, Inc.
59.
Zhang Zhimin, Ning Huansheng, Shi Feifei, Farha Fadi, Xu Yang, Xu Jiabo, Zhang Fan, Choo Kim-Kwang Raymond. Artificial Intelligence Review, 2022, 55(2): 1029–1053
Artificial Intelligence Review - In recent times, there have been attempts to leverage artificial intelligence (AI) techniques in a broad range of cyber security applications. Therefore, this paper...
60.
Pinch analysis was initially developed as a methodology for optimizing energy efficiency in process plants. Applications of pinch analysis are based on the common principle of using stream quantity and quality to determine optimal system targets. This initial targeting step identifies the pinch point, which allows complex problems to be decomposed for the subsequent design of an optimal network using insights drawn from the targeting stage. One important class of pinch analysis problems is energy planning with footprint constraints, which began with the development of carbon emissions pinch analysis; in such problems, energy sources and demands are characterized by carbon footprint as the quality index. The methodology has been extended with alternative quality indexes that measure different sustainability dimensions, such as water footprint, land footprint, emergy transformity, inoperability risk, energy return on investment, and human fatalities. Pinch analysis variants still share the limitation of using only one quality index at a time, and previous attempts to develop pinch analysis methods with multiple indices have been only partially successful, for special cases. In this work, a multiple-index pinch analysis method is developed using an aggregate quality index, defined as a weighted linear function of the different quality indexes normally used in energy planning. The weights used to compute the aggregate index are determined via the analytic hierarchy process. A case study of the Indian power sector illustrates how this approach allows multiple sustainability dimensions to be accounted for in energy planning.
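The aggregate-index construction can be sketched as follows (the pairwise judgments and per-source index values below are invented for illustration, not the paper's case-study data): the analytic hierarchy process derives weights from a pairwise comparison matrix via its principal eigenvector, and the aggregate quality index is the weighted linear sum of the individual indexes.

```python
# AHP weights via power iteration, then a weighted linear aggregate index.

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights of a positive pairwise comparison matrix."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]    # renormalize so weights sum to 1
    return w

# Hypothetical judgments: carbon footprint vs water footprint vs land footprint.
pairwise = [[1,     3,   5],
            [1 / 3, 1,   2],
            [1 / 5, 1 / 2, 1]]
w = ahp_weights(pairwise)

# Hypothetical, already-normalized quality indexes per energy source.
sources = {
    "coal":  [1.00, 0.40, 0.20],
    "solar": [0.05, 0.10, 0.60],
}
aggregate = {name: sum(wi * qi for wi, qi in zip(w, q))
             for name, q in sources.items()}
print(aggregate)
```

Once every source is scored on the single aggregate index, the problem collapses back to a standard single-quality pinch analysis, which is what lets the existing targeting machinery handle multiple sustainability dimensions at once.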
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号