Article Search
Paid full text   4819 papers
Free   78 papers
Free (domestic)   5 papers
Electrical Engineering   36 papers
General   5 papers
Chemical Industry   642 papers
Metalworking   38 papers
Machinery & Instrumentation   37 papers
Architecture & Building Science   197 papers
Mining Engineering   21 papers
Energy & Power   62 papers
Light Industry   169 papers
Water Conservancy Engineering   27 papers
Petroleum & Natural Gas   50 papers
Radio Electronics   171 papers
General Industrial Technology   386 papers
Metallurgical Industry   2618 papers
Atomic Energy Technology   19 papers
Automation Technology   424 papers
2022   27 papers
2021   35 papers
2019   37 papers
2018   38 papers
2017   54 papers
2016   50 papers
2015   44 papers
2014   42 papers
2013   182 papers
2012   72 papers
2011   105 papers
2010   83 papers
2009   96 papers
2008   105 papers
2007   109 papers
2006   97 papers
2005   112 papers
2004   76 papers
2003   98 papers
2002   69 papers
2001   74 papers
2000   51 papers
1999   112 papers
1998   706 papers
1997   402 papers
1996   254 papers
1995   171 papers
1994   136 papers
1993   146 papers
1992   50 papers
1991   63 papers
1990   79 papers
1989   74 papers
1988   72 papers
1987   77 papers
1986   55 papers
1985   65 papers
1984   39 papers
1983   38 papers
1982   43 papers
1981   35 papers
1980   39 papers
1979   34 papers
1978   38 papers
1977   51 papers
1976   121 papers
1975   30 papers
1974   32 papers
1973   43 papers
1972   23 papers
Sorted by:   4902 query results found, search time 15 ms
101.
102.
Phenomenological softening points were measured on a series of 13 anionic, nearly monodisperse, atactic polystyrenes using a DuPont 943 thermomechanical analyzer (TMA) in a penetration mode. Although TMA cannot identify the nature of the “transition” observed as such, the results obtained support the evidence for the Tg, Tll, and Tll′ transitions in polystyrene discussed in recent literature. Tg and Tll were found to vary with molecular weight in a systematic manner, while Tll′ could only be observed at very high molecular weight. The technique appears to be quite useful in offering rapid and reproducible information on the various transitions in the liquid state of polystyrene.
103.
A molecular model of the binding site of an anti-carbohydrate antibody (YsT9.1) has been developed using computer-assisted modeling techniques and molecular dynamics calculations. Sequence homologies among YsT9.1 and the Fv regions of McPC603, J539 and human Bence-Jones protein REI, all of which have solved crystal structures, provided the basis for the modeling. The groove-type combining site model had a topography which was complementary to low energy conformers of the polysaccharide, a Brucella O-antigen, and the site could be almost completely filled by a pentasaccharide epitope in either of two docking modes. Putative interactions between this epitope and the antibody are consistent with the known structural requirements for binding and lead to the design of oligosaccharide inhibitors that probe the veracity of the modeled docked complex. Ultimately both the Fv model and the docked complex will be compared with independent crystal structures of YsT9.1 Fab with and without pentasaccharide inhibitor, currently at the stage of refinement.
104.
Fraudulent online sellers often collude with reviewers to garner fake reviews for their products. This act undermines the trust of buyers in product reviews, and potentially reduces the effectiveness of online markets. Being able to accurately detect fake reviews is, therefore, critical. In this study, we investigate several preprocessing and textual-based featuring methods along with machine learning classifiers, including single and ensemble models, to build a fake review detection system. Given the nature of product review data, where the number of fake reviews is far less than that of genuine reviews, we examine the results for each class in detail in addition to the overall results. Our preliminary analysis shows that, owing to the imbalanced data, there is a large gap between the per-class accuracies (e.g., 1.3% for the fake review class and 99.7% for the genuine review class), even though the overall accuracy looks promising (around 89.7%). We propose two dynamic random sampling techniques, applicable to textual-based featuring methods, to address this class imbalance problem. Our results indicate that both sampling techniques can improve the accuracy of the fake review class: for balanced datasets, the accuracies can be improved to a maximum of 84.5% and 75.6% for random under- and over-sampling, respectively. However, the accuracies for genuine reviews decrease to 75% and 58.8% for random under- and over-sampling, respectively. We also find that, for smaller datasets, the Adaptive Boosting ensemble model outperforms single classifiers, whereas, for larger datasets, the performance improvement from ensemble models is insignificant compared to the best results obtained by single classifiers.
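To make the balancing step concrete, here is a minimal, dependency-free Python sketch of plain random under- and over-sampling of a labelled review set. It is a generic illustration of the idea only, not the paper's dynamic sampling techniques; the function names and toy data are invented for the example.

import random

def group_by_class(samples, labels):
    # Collect the samples of each class into one bucket per label.
    buckets = {}
    for x, y in zip(samples, labels):
        buckets.setdefault(y, []).append(x)
    return buckets

def random_undersample(samples, labels, seed=0):
    # Randomly drop majority-class samples until every class matches the smallest class.
    rng = random.Random(seed)
    buckets = group_by_class(samples, labels)
    target = min(len(xs) for xs in buckets.values())
    out_x, out_y = [], []
    for y, xs in buckets.items():
        for x in rng.sample(xs, target):
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

def random_oversample(samples, labels, seed=0):
    # Randomly duplicate minority-class samples until every class matches the largest class.
    rng = random.Random(seed)
    buckets = group_by_class(samples, labels)
    target = max(len(xs) for xs in buckets.values())
    out_x, out_y = [], []
    for y, xs in buckets.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        for x in picks:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Toy usage: 1 fake review vs. 4 genuine reviews, balanced before training any classifier.
reviews = ["buy now!!!", "solid product", "works fine", "arrived late but ok", "great value"]
labels = ["fake", "genuine", "genuine", "genuine", "genuine"]
print(random_undersample(reviews, labels))
print(random_oversample(reviews, labels))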
105.
Blockchain has recently emerged as a research trend, with potential applications in a broad range of industries and contexts. One particularly successful Blockchain technology is the smart contract, which is widely used in commercial settings (e.g., high value financial transactions). This, however, has security implications due to the potential to financially benefit from a security incident (e.g., identification and exploitation of a vulnerability in the smart contract or its implementation). Among smart contract platforms, Ethereum is the most active and prominent. Hence, in this paper, we systematically review existing research efforts on Ethereum smart contract security, published between 2015 and 2019. Specifically, we focus on how smart contracts can be maliciously exploited and targeted, covering security issues in the contract programming model, vulnerabilities in the programs themselves, and safety considerations introduced by the program execution environment. We also identify potential research opportunities and a future research agenda.
106.
107.
108.
Greedy scheduling heuristics provide a low complexity and scalable, albeit sub-optimal, strategy for hardware-based crossbar schedulers. In contrast, the maximum matching algorithm for bipartite graphs can be used to provide optimal scheduling for crossbar-based interconnection networks, at a significant cost in complexity and scalability. In this paper, we show how maximum matching can be reformulated in terms of Boolean operations rather than the more traditional formulations. By leveraging the inherent parallelism available in custom hardware design, we reformulate maximum matching in terms of Boolean operations rather than matrix computations and introduce three maximum matching implementations in hardware. Specifically, we examine a Pure Logic Scheduler with three dimensions of parallelism, a Matrix Scheduler with two dimensions of parallelism and a Vector Scheduler with one dimension of parallelism. These designs reduce the algorithmic complexity for an N×N network from O(N³) to O(1), O(K), and O(KN), respectively, where K is the number of optimization steps. While an optimal scheduling algorithm requires K = 2N − 1 steps, by starting with our hardware-based greedy strategy to generate an initial schedule, our simulation results show that the maximum matching scheduler can achieve 99% of the optimal schedule when K = 9. We examine hardware and time complexity of these architectures for crossbar sizes of up to N = 1024. Using FPGA synthesis results, we show that a greedy schedule for crossbars, ranging from 8×8 to 256×256, can be optimized in less than 20 ns per optimization step. For crossbars reaching 1024×1024 the scheduling can be completed in approximately 10 μs with current technology and could reach under 90 ns with future technologies.
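As a rough, software-only illustration of the greedy-then-optimize idea above, the Python sketch below builds a greedy matching over a Boolean request matrix and then applies single augmenting-path optimization steps until the matching is maximum. It is a plain sequential sketch, not the paper's Boolean/hardware formulation, and all function names are invented for the example.

def greedy_schedule(requests):
    # Greedy crossbar matching: scan inputs in order and grant the first free requested output.
    # requests[i][j] is True if input i has traffic for output j.
    n = len(requests)
    grant = [None] * n              # grant[i] = output matched to input i
    output_busy = [False] * n
    for i in range(n):
        for j in range(n):
            if requests[i][j] and not output_busy[j]:
                grant[i] = j
                output_busy[j] = True
                break
    return grant

def augment_once(requests, grant):
    # One optimization step: try to grow the matching with a single augmenting path.
    n = len(requests)
    match_of_output = {j: i for i, j in enumerate(grant) if j is not None}

    def try_input(i, seen):
        for j in range(n):
            if requests[i][j] and j not in seen:
                seen.add(j)
                if j not in match_of_output or try_input(match_of_output[j], seen):
                    grant[i] = j
                    match_of_output[j] = i
                    return True
        return False

    for i in range(n):
        if grant[i] is None and try_input(i, set()):
            return True                 # matching grew by one
    return False                        # already maximum

# Toy usage: a 4x4 request matrix where greedy leaves inputs unmatched that augmentation fixes.
R = [[1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 0]]
requests = [[bool(x) for x in row] for row in R]
schedule = greedy_schedule(requests)
while augment_once(requests, schedule):     # repeated optimization steps
    pass
print(schedule)                             # [1, 0, 3, 2]

In this sketch, each successful augment_once call plays the role of one optimization step applied to the greedy initial schedule.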
109.
We propose a finite structural translation of possibly recursive π-calculus terms into Petri nets. This is achieved by using high-level nets together with an equivalence on markings in order to model entering into recursive calls, which do not need to be guarded. We view a computing system as consisting of a main program (π-calculus term) together with procedure declarations (recursive definitions of π-calculus identifiers). The control structure of these components is represented using disjoint high-level Petri nets, one for the main program and one for each of the procedure declarations. The program is executed once, while each procedure can be invoked several times (even concurrently), each such invocation being uniquely identified by structured tokens which correspond to the sequence of recursive calls along the execution path leading to that invocation.
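The idea of structured tokens that identify each procedure invocation by its call path can be pictured with the following toy Python sketch of a high-level net marking. It is an invented, generic fragment for illustration only (the class, place and label names are assumptions), not the paper's actual translation of π-calculus terms.

from collections import defaultdict

class HighLevelNet:
    # A tiny high-level net whose tokens are tuples recording the sequence of calls.
    def __init__(self):
        self.marking = defaultdict(list)        # place name -> list of structured tokens

    def add_token(self, place, token):
        self.marking[place].append(token)

    def fire_call(self, caller_place, callee_entry, call_label):
        # Fire a "call" transition: consume a caller token and put a token in the
        # callee's entry place whose value extends the caller's call path.
        token = self.marking[caller_place].pop()
        new_token = token + (call_label,)
        self.marking[callee_entry].append(new_token)
        return new_token

# Toy usage: the main program invokes procedure P, which then invokes itself.
net = HighLevelNet()
net.add_token("main.start", ())                 # the single token of the main program
t1 = net.fire_call("main.start", "P.entry", "call_1")
t2 = net.fire_call("P.entry", "P.entry", "call_2")
print(t1, t2)                                   # ('call_1',) ('call_1', 'call_2')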
110.
In the 1990s, enrollments grew rapidly in information systems (IS) and computer science. Then, beginning in 2000 and 2001, enrollments declined precipitously. This paper looks at the enrollment bubble and the dotcom bubble that drove IT enrollments. Although the enrollment bubble occurred worldwide, this paper focuses primarily on U.S. data, which is widely available, and secondarily on data from Western Europe. The paper notes that the dotcom bubble was an investment disaster but that U.S. IT employment fell surprisingly little and soon surpassed the bubble's peak IT employment. In addition, U.S. IT unemployment rose to almost the level of total unemployment in 2003, then fell to traditionally low levels by 2005. Job prospects in the U.S. and most other countries are good for the short term, and the U.S. Bureau of Labor Statistics employment projections for 2006–2016 indicate that job prospects in the U.S. will continue to be good for most IT jobs. However, offshoring is a persistent concern for students in Western Europe and the United States. The data on offshoring are of poor quality, but several studies indicate that IT job losses from offshoring are small and may be counterbalanced by gains in IT inshoring jobs. At the same time, offshoring and productivity gains appear to be making low-level jobs such as programming and user support less attractive. This means that IS and computer science programs will have to focus on producing higher-level job skills among graduates. In addition, students may have to stop considering the undergraduate degree to be a terminal degree in IS and computer science.