991.
The plethora of comparison shopping agents (CSAs) in today's markets enables buyers to query more than a single CSA when shopping, thus expanding the list of sellers whose prices they obtain. This potentially decreases the chance of a purchase within any single interaction between a buyer and a CSA, and consequently decreases each CSA's expected revenue per query. Obviously, a CSA can improve its competence in such settings by acquiring more sellers' prices, potentially resulting in a more attractive "best price". In this paper we suggest a complementary approach that improves the attractiveness of the best result returned by intelligently controlling the order in which the retrieved prices are presented to the user, in a way that exploits several known cognitive biases of human buyers. The advantage of this approach lies in its ability to affect the buyer's tendency to terminate her search for a better price, and hence to avoid querying further CSAs, without spending valuable resources on finding additional prices to present. The effectiveness of our method is demonstrated using real data collected from four CSAs for five products. Our experiments confirm that the suggested method effectively influences buyers in a way that is highly advantageous to the CSA compared with the common method of presenting prices. Furthermore, we experimentally show that all components of our method are essential to its success.
992.
Scientific communities have adopted different conventions for ordering authors on publications. Are these choices inconsequential, or do they have significant influence on individual authors, the quality of the projects completed, and research communities at large? What are the trade-offs of using one convention over another? To investigate these questions, we formulate a basic two-player game-theoretic model, which already illustrates interesting phenomena that can occur in more realistic settings. We find that contribution-based ordering leads to a denser collaboration network and a greater number of publications, while alphabetical ordering can improve research quality. Contrary to the assumption that free riding is a weakness of the alphabetical ordering scheme, when there are only two authors this phenomenon can occur under any contribution scheme, and the worst case occurs under contribution-based ordering. Finally, we show how authors working on multiple projects can cooperate to attain optimal research quality and eliminate free riding under either contribution scheme.
993.
Drawing on the organizational legitimacy literature, we propose and test a set of hypotheses arguing that social media, an artifact of information technology, serves as a mechanism for conferring legitimacy in the market for initial public offerings (IPOs). The extant literature identifies third-party authorities, such as traditional media outlets and industry analysts, as a valuable source of organizational legitimacy. Social media and micro-blogging, however, remain outside current classifications of the phenomenon. This study theoretically develops and empirically tests a new mechanism in the legitimation process: direct interaction with potential investors and society at large via social media. Our findings broadly support the claim that having a Twitter account, and the extent to which a firm uses Twitter, are associated with systematically higher levels of IPO underpricing. Moreover, we find that social media variables external to the firm, including the number of followers and retweets, also affect the level of underpricing in an IPO. In conclusion, we highlight the emerging role of social media in processes of legitimation and invite additional research at the intersection of IS and management.
994.
995.
Big data is being implemented with success in the private sector and in science. Yet the public sector seems to be falling behind, despite the potential value of big data for government. Government organizations do recognize the opportunities of big data but seem uncertain about whether they are ready for its introduction, and whether they are adequately equipped to use it. This paper addresses those uncertainties. It presents an assessment framework for evaluating public organizations' big data readiness. Doing so demystifies the concept of big data, as it is expressed in terms of specific and measurable organizational characteristics. The framework was tested by applying it to organizations in the Dutch public sector. The results suggest that organizations may be technically capable of using big data, but they will not gain significantly from these activities if the applications do not fit their organizations and main statutory tasks. The framework proved helpful in pointing out areas where public sector organizations could improve, providing guidance on how government can become more big data ready in the future.
996.
One of the problems with insider threat research is the lack of a complete 360° view of an insider threat dataset due to inadequate experimental design. This has prevented the modeling of a computational system to protect against insider threat situations. This paper provides a contemporary methodological approach for using online games to simulate insider betrayal for predictive behavioral research. The Leader's Dilemma Game simulates an insider betrayal scenario for analyzing organizational trust relationships, providing an opportunity to examine the trustworthiness of focal individuals, as measured by humans acting as sensors while engaging in computer-mediated communication. This experimental design provides a window into trustworthiness attribution that can generate a rigorous and relevant behavioral dataset, and contributes to building a cyber laboratory that advances future insider threat study.
997.
Since today's real-world graphs, such as social network graphs, are evolving all the time, it is of great importance to perform graph computation and analysis on these dynamic graphs. Because many applications, such as social network link analysis in the presence of inactive users, need to handle failed links or nodes, decremental computation and maintenance for graphs is considered a challenging problem. Shortest path computation is one of the most fundamental operations for managing and analyzing large graphs. A number of indexing methods have been proposed to answer distance queries in static graphs. Unfortunately, there is little work on answering such queries for dynamic graphs. In this paper, we focus on the problem of computing the shortest path distance in dynamic graphs, particularly under decremental updates (i.e., edge deletions). We propose maintenance algorithms based on distance labeling, which can handle decremental updates efficiently. By exploiting properties of distance labeling in the original graphs, we are able to efficiently maintain distance labeling for the new graphs. We experimentally evaluate our algorithms using eleven real-world large graphs and confirm the effectiveness and efficiency of our approach. More specifically, our method can speed up index re-computation by up to an order of magnitude compared with the state-of-the-art method, Pruned Landmark Labeling (PLL).
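For context, the 2-hop distance labeling that PLL-style indexes build can be sketched as follows. This is a minimal static construction and query for unweighted graphs under illustrative assumptions (the function names and data layout are ours); it is not the paper's decremental maintenance algorithm:

```python
from collections import deque

def query(labels, u, v):
    """Distance between u and v via their common landmarks (inf if none)."""
    best = float('inf')
    lv = labels[v]
    for lm, du in labels[u].items():
        dv = lv.get(lm)
        if dv is not None and du + dv < best:
            best = du + dv
    return best

def build_labels(adj, order):
    """Pruned BFS from each vertex in `order` (most central vertices first).
    A BFS from landmark lm is pruned at w when existing labels already
    certify a path from lm to w that is no longer than the BFS distance."""
    labels = {v: {} for v in adj}
    for lm in order:
        dist = {lm: 0}
        q = deque([lm])
        while q:
            w = q.popleft()
            d = dist[w]
            if query(labels, lm, w) <= d:
                continue  # pruned: no new shortest-path information here
            labels[w][lm] = d
            for nb in adj[w]:
                if nb not in dist:
                    dist[nb] = d + 1
                    q.append(nb)
    return labels
```

On a path graph 0-1-2-3 with the central vertices ordered first, `query` then answers exact shortest-path distances from the labels alone; the decremental problem studied in the paper is keeping such labels correct after edge deletions without rebuilding them from scratch.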
998.
Multi-agent systems (MAS) literature often assumes decentralized MAS to be especially suited for dynamic and large-scale problems. In operational research, however, the prevailing paradigm is the use of centralized algorithms. This paper empirically evaluates whether a multi-agent system can outperform a centralized algorithm on dynamic and large-scale logistics problems. This evaluation is novel in three aspects: (1) to ensure fairness, both implementations are subject to the same constraints with respect to hardware resources and software limitations; (2) the implementations are systematically evaluated with varying problem properties; and (3) all code is open source, facilitating reproduction and extension of the experiments. Existing work lacks a systematic evaluation of centralized versus decentralized paradigms due to the absence of a real-time logistics simulator supporting both paradigms and of a dataset of problem instances with varying properties. We extended an existing logistics simulator to perform real-time experiments, and we use a recent dataset of instances of the dynamic pickup-and-delivery problem with time windows with varying levels of dynamism, urgency, and scale. The OptaPlanner constraint-satisfaction solver is used in a centralized way to compute a global schedule, and also as part of a decentralized MAS based on the dynamic contract-net protocol (DynCNET) algorithm. The experiments show that the DynCNET MAS finds solutions with a relatively lower operating cost when a problem has all three of the following properties: medium to high dynamism, high urgency, and medium to large scale. In these circumstances, the centralized algorithm finds solutions with an average cost of 112.3% of that of the solutions found by the MAS. Averaged over all scenario types, however, the average cost of the centralized algorithm is 94.2% of the MAS cost. The results indicate that the MAS performs best on very urgent problems of medium to large scale.
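The announce-bid-award cycle underlying contract-net protocols such as DynCNET can be illustrated with a greedy allocation loop. This sketch is hypothetical: the agent/task representation and the travel-cost bid function are ours, and it omits the dynamic re-assignment that distinguishes DynCNET from the classic contract net:

```python
def contract_net_allocate(tasks, agents, cost_fn):
    """Contract-net style allocation: each task is announced in turn,
    every agent bids its marginal cost, and the lowest bidder is awarded
    the task."""
    assignment = {a: [] for a in agents}
    for task in tasks:
        bids = {a: cost_fn(a, task, assignment[a]) for a in agents}
        winner = min(bids, key=bids.get)
        assignment[winner].append(task)
    return assignment

# Toy setting: agents and tasks are positions on a line; an agent's bid is
# the distance from its latest location to the announced task.
def travel_cost(agent_pos, task_pos, assigned):
    start = assigned[-1] if assigned else agent_pos
    return abs(task_pos - start)
```

With agents at positions 0 and 10 and tasks announced at 1, 9, and 2, the loop awards tasks 1 and 2 to the first agent and task 9 to the second, since each award goes to the nearest (cheapest) bidder at announcement time.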
999.
We present an additional feature for the Challenge Handshake Authentication Protocol (CHAP) that makes the protocol resilient to offline brute-force/dictionary attacks. We base our contribution on the concept of a rewrite complement for ground term rewrite systems (GTRSs). We also introduce and study the notion of a type-based complement, which is a special case of a rewrite complement. We show the following decision results. Given GTRSs A, C, and a reduced GTRS B over some ranked alphabet Σ, one can decide whether C is a type-based complement of A for B. Given a GTRS A and a reduced GTRS B over some ranked alphabet Σ, one can decide whether there is a GTRS C such that C is a type-based complement of A for B. If the answer is yes, then we can construct such a GTRS C.
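For context, the baseline protocol being hardened is standard CHAP (RFC 1994), whose response is a hash over a fresh challenge; an eavesdropper who records one (challenge, response) pair can test candidate secrets offline, which is the weakness the paper addresses. A minimal sketch of the baseline exchange (the GTRS-based complement construction itself does not fit in a few lines):

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP response: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes,
                challenge: bytes, response: bytes) -> bool:
    """Authenticator side: recompute the hash and compare."""
    return chap_response(identifier, secret, challenge) == response
```

Because `chap_response` is a fast, unsalted-by-anything-secret computation, any observed pair lets an attacker iterate a dictionary of candidate secrets until the digest matches; the paper's added feature is aimed at removing exactly this offline check.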
1000.
We propose a novel algorithm, called REGGAE, for the generation of momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions, which leads to the production of configurations with reasonably large weights.

Program summary

Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events)
Catalogue identifier: AEJR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1523
No. of bytes in distributed program, including test data, etc.: 9608
Distribution format: tar.gz
Programming language: C++
Computer: PC Pentium 4, though no particular tuning for this machine was performed.
Operating system: Originally designed on a Linux PC with g++, but it has been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition as well.
RAM: Depends on the number of particles generated. For 10 particles, as in the attached example, it requires about 120 kB.
Classification: 11.2
Nature of problem: Generate momenta of a sample of particles with given masses that obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space.
Solution method: The algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, which models particle production as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled: each particle undergoes a collision with some partner such that in the pair center-of-mass system the new momentum directions are distributed isotropically. After each particle has collided only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves the performance.
Running time: Depends on the number of particles and the number of events to be generated. On a Linux PC with a 2 GHz processor, generating 1000 events with 10 particles each takes about 3 s.
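The elementary building block of GENBOD's decay chain (and of the pairwise reshuffling) is a two-body decay with an isotropic decay axis in the parent rest frame. The following is an illustrative reconstruction of that step, not code from the distributed program:

```python
import math
import random

def two_body_decay(M, m1, m2, rng=random):
    """Four-momenta (E, px, py, pz) of a two-body decay M -> m1 + m2 in the
    parent rest frame, with the decay axis drawn isotropically."""
    assert M >= m1 + m2, "decay must be kinematically allowed"
    # Momentum magnitude from the Kallen (triangle) function:
    # p = sqrt(lambda(M^2, m1^2, m2^2)) / (2 M)
    p = math.sqrt((M * M - (m1 + m2) ** 2) * (M * M - (m1 - m2) ** 2)) / (2.0 * M)
    # Isotropic direction: cos(theta) uniform in [-1, 1], phi uniform in [0, 2pi)
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    n = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
    p1 = (math.hypot(p, m1), *(p * c for c in n))    # E1 = sqrt(p^2 + m1^2)
    p2 = (math.hypot(p, m2), *(-p * c for c in n))   # back-to-back partner
    return p1, p2
```

By construction the two energies sum to M and the three-momenta cancel, so a chain of such decays conserves total energy and momentum; REGGAE then reshuffles the resulting momenta pairwise to flatten the distribution over phase space.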