992.
Antoine Rozenknop, Roberto Wolfler Calvo, Laurent Alfandari, Daniel Chemla, Lucas Létocart. Journal of Scheduling, 2013, 16(6): 585-604
This paper presents a heuristic method based on column generation for the EDF (Électricité de France) long-term electricity production planning problem, proposed as the subject of the ROADEF/EURO 2010 Challenge. To our knowledge, this was the first-ranked among the methods based on mathematical programming, and it ranked fourth overall. The problem consists in determining a production plan over the whole time horizon for each thermal power plant of the French electricity company and, for nuclear plants, a schedule of the plant outages needed for refueling and maintenance operations. The average cost of the overall outage and production planning, computed over a set of demand scenarios, is to be minimized. The method proceeds in two stages. In the first stage, outage dates are fixed once and for all for each nuclear plant. Data are aggregated with a single average scenario and reduced time steps, and a set-partitioning reformulation of this aggregated problem is solved to fix the outage dates with a heuristic based on column generation. The pricing problem associated with each nuclear plant is a shortest path problem in an appropriately constructed graph. In the second stage, the reload level is determined at each outage date, now considering all scenarios. Finally, the production quantities between two outages are optimized for each plant and each scenario by solving independent linear programming problems.
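The pricing step described above reduces, for each plant, to a shortest-path problem in a DAG, which can be solved by dynamic programming in topological order. The graph construction itself (what nodes and reduced-cost arcs encode) is specific to the paper and not reproduced here; the sketch below only illustrates the generic DP, and the toy graph in the test is hypothetical.

```python
def shortest_path_dag(nodes, arcs, source, sink):
    """Shortest path in a DAG by DP.

    nodes: list of nodes in topological order.
    arcs:  dict node -> list of (successor, reduced_cost) pairs.
    Returns (path, cost); in column generation, a path with negative total
    reduced cost is a candidate column for the restricted master problem.
    """
    dist = {n: float("inf") for n in nodes}
    pred = {n: None for n in nodes}
    dist[source] = 0.0
    for u in nodes:  # relax outgoing arcs in topological order
        for v, cost in arcs.get(u, []):
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v] = u
    # recover the path by walking predecessors back from the sink
    path, n = [], sink
    while n is not None:
        path.append(n)
        n = pred[n]
    return list(reversed(path)), dist[sink]
```

Because the nodes are processed in topological order, each arc is relaxed exactly once, giving O(V + E) time, which is what makes pricing cheap enough to call repeatedly inside column generation.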
993.
As telecommunication networks evolve rapidly in terms of scale, complexity, and heterogeneity, the efficiency of fault localization procedures and the accuracy of anomaly detection are becoming important factors that largely influence decision making in large management companies. For this reason, telecommunication companies are making significant efforts, investing in new technologies and projects aimed at finding efficient management solutions. One of the challenging issues for network and system management operators is dealing with the huge number of alerts generated by the managed systems and networks. Alert correlation is one of the most popular means of discovering anomalous behaviors and speeding up fault localization. Although many different alert correlation techniques have been investigated, this is still an active research field. In this paper, a survey of the state of the art in alert correlation techniques is presented. Unlike other authors, we consider the correlation process to be a problem common to several fields of industry, and we therefore focus on showing the broad influence of this problem. Additionally, we suggest an alert correlation architecture capable of modeling current and prospective proposals. Finally, we also review some of the most important commercial products currently available.
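As a concrete illustration of the kind of processing such surveys cover, here is a minimal sketch of one elementary correlation primitive: grouping alerts that concern the same device within a sliding time window. The `(timestamp, device, message)` schema is an assumption for illustration only, not taken from the paper.

```python
def correlate(alerts, window=60.0):
    """Group alerts per device when they arrive within `window` seconds.

    alerts: iterable of (timestamp, device, message) tuples (hypothetical
    schema). Returns a list of groups, each a list of alert tuples.
    """
    groups = []
    last_group_of = {}  # device -> index of its currently open group
    for ts, device, msg in sorted(alerts):
        key = last_group_of.get(device)
        if key is not None and ts - groups[key][-1][0] <= window:
            groups[key].append((ts, device, msg))  # extend the open group
        else:
            groups.append([(ts, device, msg)])     # open a fresh group
            last_group_of[device] = len(groups) - 1
    return groups
```

Real correlation engines of course go far beyond this, adding topology awareness, root-cause rules, and learned patterns, but the window-and-key grouping above is the common starting point.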
994.
Péter Babarczi, Gergely Biczók, Harald Øverby, János Tapolcai, Péter Soproni. Computer Networks, 2013, 57(9): 1974-1990
Communication networks have to provide a high level of availability and instantaneous recovery after failures in order to ensure sufficient survivability for mission-critical services. Currently, dedicated path protection (1 + 1) is implemented in backbone networks to provide the necessary resilience and instantaneous recovery against single link failures with remarkable simplicity. However, in order to satisfy strict availability requirements, connections also have to be resilient against Shared Risk Link Group (SRLG) failures. In addition, switching-matrix reconfigurations have to be avoided after a failure in order to guarantee instantaneous recovery. For this purpose, there are several possible realization strategies that improve on traditional 1 + 1 path protection by lowering the reserved bandwidth while conserving all of its favorable properties. These methods utilize diversity coding or network coding, or generalize the disjoint-path constraint of 1 + 1. In this paper, we consider the cost aspect of the traditional and alternative 1 + 1 realization strategies. We evaluate the bandwidth cost of the different schemes both analytically and empirically in realistic network topologies. As the more complex realizations lead to NP-complete problems even in the single-link-failure case, we propose both Integer Linear Programming (ILP) based optimal methods and heuristic and meta-heuristic approaches to solve them. Our findings provide a tool and guidelines that help service providers select the path protection method with the lowest bandwidth cost for their network at a given level of reliability.
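The disjoint-path constraint at the heart of 1 + 1 protection can be illustrated with a small sketch. Note the caveat in the comments: the greedy "find one path, delete its links, find another" approach shown here can fail on trap topologies where a disjoint pair exists but is missed; exact methods (Suurballe/Bhandari-style algorithms) avoid this. The adjacency-set graph representation is an illustrative choice, not anything from the paper.

```python
from collections import deque

def bfs_path(adj, s, t):
    """Shortest path (fewest hops) in an undirected graph of adjacency sets."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return list(reversed(path))
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def two_link_disjoint(adj, s, t):
    """Greedy sketch of a link-disjoint working/protection path pair.

    Caveat: greedy removal can fail on 'trap' topologies even when a
    disjoint pair exists; Suurballe/Bhandari compute such pairs exactly.
    """
    p1 = bfs_path(adj, s, t)
    if p1 is None:
        return None
    pruned = {u: set(vs) for u, vs in adj.items()}
    for u, v in zip(p1, p1[1:]):  # delete the links of the first path
        pruned[u].discard(v)
        pruned[v].discard(u)
    p2 = bfs_path(pruned, s, t)
    return (p1, p2) if p2 else None
```

In 1 + 1 operation, traffic is duplicated on both paths and the receiver picks the surviving copy, which is what makes recovery instantaneous at the price of doubled bandwidth, the cost the schemes surveyed in the paper try to reduce.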
995.
Jesús M. Hermida, Santiago Meliá, Andrés Montoyo, Jaime Gómez. Information Systems Frontiers, 2013, 15(3): 411-431
Business Intelligence (BI) applications have gradually been ported to the Web in search of a global platform for the consumption and publication of data and services. On the Internet, apart from techniques for data/knowledge management, BI Web applications need highly interoperable interfaces (similar to traditional desktop interfaces) for the visualisation of data/knowledge. In some cases, this has been provided by Rich Internet Applications (RIA). The development of these BI RIAs has traditionally been performed manually and, given the complexity of the final application, is a process that can be prone to errors. Applying model-driven engineering techniques can reduce the cost of developing and maintaining these applications (in terms of time and resources), as has been demonstrated for other types of Web applications. In light of these issues, this paper introduces Sm4RIA-B, a model-driven methodology for developing RIAs as BI Web applications. To overcome the limitations of RIAs regarding knowledge management on the Web, the paper also presents a new RIA platform for BI, called RI@BI, which extends the functionality of traditional RIAs by means of Semantic Web technologies and B2B techniques. Finally, we evaluate the whole approach on a case study: the development of a social network site for an enterprise project manager.
996.
Today, the Web is the largest source of information worldwide, and there is a strong trend for decision-making applications such as Data Warehousing (DW) and Business Intelligence (BI) to move onto the Web, especially into the cloud. Integrating data into DW/BI applications is a critical and time-consuming task. To support better decisions in DW/BI applications, next-generation data integration poses new requirements to data integration systems beyond those of traditional data integration. In this paper, we propose a generic, metadata-based, service-oriented, and event-driven approach for integrating Web data in a timely and autonomous fashion. Besides handling data heterogeneity, distribution, and interoperability, our approach satisfies near-real-time requirements and realizes active data integration. To this end, we design and develop a framework that uses Web standards (e.g., XML and Web services) to tackle data heterogeneity, distribution, and interoperability issues. Moreover, our framework uses Active XML (AXML) to warehouse passive data, as well as services to integrate active and dynamic data on the fly. AXML embedded services and change-detection services ensure near-real-time data integration. Furthermore, the idea of integrating Web data actively and autonomously revolves around mining events logged by the data integration environment. We therefore propose an incremental XML-based algorithm for mining association rules from logged events, and then define active rules dynamically over the mined data to automate and reactivate integration tasks. Finally, as a proof of concept, we implement a framework prototype as a Web application using open-source tools.
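To illustrate the support/confidence machinery behind mining association rules from logged events, here is a deliberately tiny sketch restricted to two-item rules. It is neither incremental nor XML-based, unlike the algorithm proposed in the paper, and the event names in the test are hypothetical.

```python
from itertools import combinations
from collections import Counter

def mine_rules(events, min_support=2, min_conf=0.6):
    """Mine two-item association rules X -> Y from logged event sets.

    events: iterable of item lists (one list per logged transaction).
    support(X, Y) = co-occurrence count; conf(X -> Y) = support / count(X).
    Illustrative only: real miners (Apriori, FP-growth) handle larger
    itemsets and incremental updates.
    """
    item_count = Counter()
    pair_count = Counter()
    for ev in events:
        s = set(ev)
        item_count.update(s)
        pair_count.update(frozenset(p) for p in combinations(sorted(s), 2))
    rules = []
    for pair, n in pair_count.items():
        if n < min_support:
            continue  # prune infrequent pairs
        a, b = sorted(pair)
        for x, y in ((a, b), (b, a)):  # try the rule in both directions
            conf = n / item_count[x]
            if conf >= min_conf:
                rules.append((x, y, conf))
    return rules
```

In the active-integration setting described above, a mined rule such as "event X is usually followed by event Y" is then turned into an active (event-condition-action) rule that triggers the corresponding integration task automatically.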
997.
Luís Antunes, Armando Matos, Alexandre Pinto, André Souto, Andreia Teixeira. Theory of Computing Systems, 2013, 52(1): 162-178
We prove several results relating injective one-way functions, time-bounded conditional Kolmogorov complexity, and time-bounded conditional entropy. First we establish a connection between injective, strong, and weak one-way functions and the expected value of the polynomial-time-bounded Kolmogorov complexity, denoted here by E(K^t(x|f(x))). These results go in both directions: conditions on E(K^t(x|f(x))) that imply that f is a weak one-way function, and properties of E(K^t(x|f(x))) that are implied by f being a strong one-way function. In particular, we prove a separation result: based on the concept of time-bounded Kolmogorov complexity, we find an interval in which every function f is necessarily a weak but not a strong one-way function. We then propose an individual approach to injective one-way functions based on Kolmogorov complexity, defining Kolmogorov one-way functions, and prove some relationships between this new proposal and the classical definition of one-way functions, showing that a Kolmogorov one-way function is also a deterministic one-way function. A relationship between Kolmogorov one-way functions and the conjecture of polynomial-time symmetry of information is also proved. Finally, we relate E(K^t(x|f(x))) to two forms of time-bounded entropy: the unpredictability entropy H^unp, in which the "one-wayness" of a function can be easily expressed, and the Yao+ entropy, a measure based on compression/decompression schemes in which only the decompressor is restricted to be time-bounded.
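For readers unfamiliar with the terminology, the strong and weak one-way functions that the abstract contrasts can be stated via their standard textbook definitions; the formulas below are the classical ones, not notation taken from the paper.

```latex
% f : \{0,1\}^* \to \{0,1\}^* is assumed polynomial-time computable.

% f is a STRONG one-way function if every probabilistic polynomial-time
% inverter A succeeds only with negligible probability:
\forall A \;\forall c > 0 \;\exists n_0 \;\forall n \ge n_0:\quad
\Pr_{x \leftarrow \{0,1\}^n}\!\left[ A\bigl(f(x), 1^n\bigr) \in f^{-1}\bigl(f(x)\bigr) \right] < n^{-c}

% f is a WEAK one-way function if some fixed polynomial q bounds every
% inverter's success probability away from 1:
\exists q \;\forall A \;\exists n_0 \;\forall n \ge n_0:\quad
\Pr_{x \leftarrow \{0,1\}^n}\!\left[ A\bigl(f(x), 1^n\bigr) \in f^{-1}\bigl(f(x)\bigr) \right] < 1 - \frac{1}{q(n)}
```

The abstract's separation result locates functions whose E(K^t(x|f(x))) forces the weak property while ruling out the strong one, i.e. functions sitting strictly between these two definitions.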
998.
A C-coloured graph is a graph, possibly directed, whose edges are coloured with colours from the set C. Clique-width is a complexity measure for C-coloured graphs with finite C. Rank-width is an equivalent complexity measure for undirected graphs and has good algorithmic and structural properties; in particular, it is related to the vertex-minor relation. We discuss possible extensions of the notion of rank-width to C-coloured graphs. There is no unique natural notion of rank-width for C-coloured graphs. We define two notions of rank-width for them, both based on a coding of C-coloured graphs by $\mathbb{F}^{*}$-graphs ($\mathbb{F}$-coloured graphs, where $\mathbb{F}$ is a field and each edge has exactly one colour from $\mathbb{F}\setminus\{0\}$), named respectively $\mathbb{F}$-rank-width and $\mathbb{F}$-bi-rank-width. Both notions are equivalent to clique-width. We then present a notion of vertex-minor for $\mathbb{F}^{*}$-graphs and prove that $\mathbb{F}^{*}$-graphs of bounded $\mathbb{F}$-rank-width are characterised by a list of $\mathbb{F}^{*}$-graphs to exclude as vertex-minors (this list is finite if $\mathbb{F}$ is finite). An algorithm is also given that decides in time O(n^3) whether an $\mathbb{F}^{*}$-graph with n vertices has $\mathbb{F}$-rank-width (resp. $\mathbb{F}$-bi-rank-width) at most k, for fixed k and a fixed finite field $\mathbb{F}$. Graph operations for checking MSOL-definable properties on $\mathbb{F}^{*}$-graphs of bounded $\mathbb{F}$-rank-width (resp. $\mathbb{F}$-bi-rank-width) are presented. A specialisation of all these notions to graphs without edge colours is presented, showing that our results generalise those for undirected graphs.
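For plain undirected graphs, rank-width is built on the cut-rank function: the GF(2) rank of the bipartite adjacency submatrix induced by a vertex bipartition. The paper's $\mathbb{F}$-rank-width generalises the field; the sketch below shows only the GF(2) building block, and the adjacency-set graph encoding is an illustrative choice.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are int bitmasks."""
    basis = {}  # leading-bit position -> reduced basis row
    rank = 0
    for r in rows:
        while r:
            h = r.bit_length() - 1  # position of the leading 1
            if h not in basis:
                basis[h] = r        # r is independent: extend the basis
                rank += 1
                break
            r ^= basis[h]           # cancel the leading 1 and continue
    return rank

def cut_rank(adj, side, other):
    """Cut-rank of the bipartition (side, other): GF(2) rank of the
    side-by-other submatrix of the adjacency matrix. The width of the best
    branch-decomposition of this function is the graph's rank-width."""
    other = sorted(other)
    rows = []
    for u in sorted(side):
        mask = 0
        for j, v in enumerate(other):
            if v in adj[u]:
                mask |= 1 << j
        rows.append(mask)
    return gf2_rank(rows)
```

Because rank is invariant under row/column operations, cut-rank behaves well under the local complementations that define vertex-minors, which is the structural fact the paper's characterisation results generalise to arbitrary fields.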
999.
Nazim Fatès. Theory of Computing Systems, 2013, 53(2): 223-242
In the density classification problem, a binary cellular automaton (CA) must decide whether an initial configuration contains more 0s or more 1s. The answer is given when all cells of the CA agree on a given state. This problem is known to have no exact solution in the case of binary deterministic one-dimensional CA. We investigate how randomness in CA may help solve the problem. We analyse the behaviour of stochastic CA rules that perform the density classification task, and show that describing stochastic rules as a "blend" of deterministic rules allows us to derive quantitative results on the classification quality and the classification time of previously studied rules. We introduce a new rule whose effect is to spread defects and to wash them out. This stochastic rule solves the problem with arbitrary precision: its quality of classification can be made arbitrarily high, though at the price of an increased convergence time. We experimentally demonstrate that this rule exhibits good scaling properties and attains qualities of classification never reached before.
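A stochastic rule of the "blend" kind can be sketched by mixing two elementary CA rules cell-wise at random. The specific blend below, the traffic rule (ECA 184) with probability eta and the majority rule (ECA 232) otherwise, is a known density-classifying mixture in the stochastic-CA literature; it is given here only to illustrate the blend idea, not claimed to be the exact rule introduced in the paper.

```python
import random

def step(config, eta, rng):
    """One synchronous step on a ring: each cell independently applies
    ECA 184 (traffic) with probability eta, else ECA 232 (majority)."""
    n = len(config)
    out = []
    for i in range(n):
        l, c, r = config[i - 1], config[i], config[(i + 1) % n]
        if rng.random() < eta:
            # ECA 184: a 1 moves right into an empty cell ("traffic")
            new = 1 if (c == 1 and r == 1) or (l == 1 and c == 0) else 0
        else:
            # ECA 232: majority of the three-cell neighbourhood
            new = 1 if l + c + r >= 2 else 0
        out.append(new)
    return out

def classify(config, eta=0.1, max_steps=2000, seed=0):
    """Iterate until consensus; return the agreed state, or None on timeout."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        if sum(config) == 0:
            return 0
        if sum(config) == len(config):
            return 1
        config = step(config, eta, rng)
    return None
```

With eta = 0 the dynamics is pure (deterministic) majority, which already handles isolated defects; the random traffic component is what lets larger defects drift, collide, and get washed out, at the price of a longer convergence time, mirroring the trade-off described in the abstract.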
1000.
Anne Benoit, Yves Robert, Arnold L. Rosenberg, Frédéric Vivien. Theory of Computing Systems, 2013, 53(3): 386-423
One has a large workload that is "divisible" (the granularity of its constituent work can be adjusted arbitrarily) and access to p remote worker computers that can assist in computing the workload. How can one best utilize the workers? Complicating this question is the fact that each worker is subject to interruptions (of known likelihood) that kill all work in progress on it. One wishes to orchestrate sharing the workload with the workers in a way that maximizes the expected amount of work completed. Strategies for achieving this goal are presented; they balance the desire to checkpoint often, thereby decreasing the amount of vulnerable work at any point, against the desire to avoid the context switching that checkpointing requires. Schedules must also temper the desire to replicate work, because such replication diminishes the effective remote workforce. The current study demonstrates the accessibility of strategies that provably maximize the expected amount of work completed when there is only one worker (the case p=1) and, at least in an asymptotic sense, when there are two workers (the case p=2); but it strongly suggests the intractability of exact maximization for p≥2 workers, as work replication on multiple workers joins checkpointing as a vehicle for decreasing the impact of work-killing interruptions. We respond to that challenge by developing efficient heuristics that employ both checkpointing and work replication to decrease the impact of work-killing interruptions. The quality of these heuristics, in terms of the expected amount of work completed, is assessed through exhaustive simulations using both idealized models and actual trace data.
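The checkpointing trade-off described above can be made concrete with a toy expected-value computation: split a unit workload into n chunks, commit each with a checkpoint of cost eps, and account for a known interruption-time distribution. This model (and the uniform distribution in the test) is an illustrative assumption, not the paper's actual model.

```python
def expected_work(n_chunks, eps, interrupt_cdf):
    """Expected committed work for a unit workload split into n equal chunks.

    Each chunk takes 1/n time and is followed by a checkpoint of cost eps;
    a chunk counts only if the worker survives past its commit time.
    interrupt_cdf(t) is the probability the interruption occurs before t.
    """
    chunk = 1.0 / n_chunks
    total = 0.0
    for i in range(1, n_chunks + 1):
        t_done = i * (chunk + eps)  # time at which chunk i is safely committed
        total += chunk * (1.0 - interrupt_cdf(t_done))
    return total
```

With eps = 0, more checkpoints strictly help (less work is vulnerable at any moment); with eps > 0, each extra checkpoint also delays every later commit, so the optimum n is finite, which is exactly the tension the paper's strategies balance (before replication across workers enters the picture for p > 1).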