121.
The aim of this study was to prepare, characterize, and evaluate apigenin in a solid dispersion system to improve the dissolution rate and bioavailability of this poorly soluble drug. Apigenin was dissolved in an organic solvent with the micelle-forming polymer Pluronic F-127 (PL F-127). A solid dispersion of apigenin-PL F-127 was developed using a spray-drying technique. Physicochemical and in vitro characterization of the produced solid dispersion particles was conducted using scanning electron microscopy, differential scanning calorimetry, Fourier transform infrared spectroscopy, powder X-ray diffractometry, and a dissolution study. In addition, an in vivo study compared the spray-dried product with pure and marketed apigenin. Cmax was found to be around fivefold higher for the spray-dried product compared to the non-spray-dried materials. The prepared drug:polymer formulation showed elongated particles and a complete lack of crystallinity at a 1:4 ratio. The change in the vibrational wave numbers strongly suggested the formation of hydrogen bonding between apigenin and PL F-127. The significant increase in the dissolution rate and bioavailability of the spray-dried apigenin showed the potential of the solid dispersion system to overcome problems related to BCS class II drugs.
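Dissolution profiles like those compared in this study are commonly summarized with the regulatory f2 similarity factor. The sketch below is illustrative only: the %-dissolved values are hypothetical and are not the study's data.

```python
import math

def f2_similarity(ref, test):
    """f2 similarity factor for two dissolution profiles sampled at the
    same time points: f2 = 50 * log10(100 / sqrt(1 + mean squared
    difference)). f2 >= 50 is conventionally read as 'similar'."""
    n = len(ref)
    mean_sq = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq))

# Hypothetical % dissolved at five sampling times (not the paper's data):
pure_apigenin = [5, 9, 14, 18, 22]
spray_dried   = [35, 60, 78, 88, 93]

# A profile compared with itself is maximally similar (f2 = 100); the
# enhanced profile is clearly dissimilar from the pure drug (f2 < 50).
assert abs(f2_similarity(pure_apigenin, pure_apigenin) - 100.0) < 1e-9
assert f2_similarity(pure_apigenin, spray_dried) < 50.0
```

A large dissolution-rate improvement, as reported for the spray-dried dispersion, shows up as a low f2 against the pure-drug profile.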
122.
Automated cyber security configuration synthesis is the holy grail of cyber risk management. The effectiveness of cyber security is highly dependent on appropriate configuration hardening of heterogeneous, yet interdependent, network security devices, such as firewalls, intrusion detection systems, IPSec gateways, and proxies, to minimize cyber risk. However, determining a cost-effective security configuration for risk mitigation is a complex decision-making process because it requires considering many different factors, including end-hosts’ security weaknesses based on compliance checking, threat exposure due to network connectivity, potential impact/damage, service reachability requirements according to business policies, acceptable usability given security hardness, and budgetary constraints. Although many automated techniques and tools have been proposed to scan end-host vulnerabilities and verify policy compliance, existing approaches lack metrics and analytics to identify fine-grained network access control based on comprehensive risk analysis using both the hosts’ compliance reports and network connectivity. In this paper, we present new metrics and a formal framework for automatically assessing the global enterprise risk and determining the most cost-effective security configuration for risk mitigation, considering both end-host security compliance and network connectivity. Our proposed metrics measure the global enterprise risk based on the end-host vulnerabilities and configuration weaknesses collected through compliance scanning reports, their inter-dependencies, and network reachability. We then use these metrics to automatically generate a set of host-based vulnerability fixes and network access control decisions that mitigates the global network risk so as to satisfy the desired return on investment in cyber security. We solve the cyber risk mitigation problem using advanced formal methods, namely Satisfiability Modulo Theories, which have shown scalability with large networks.
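As a toy illustration of the underlying decision problem, the sketch below brute-forces the set of vulnerability fixes that minimizes residual risk under a budget. The vulnerability names, risk scores, and costs are invented; the paper's actual formulation is an SMT encoding rather than enumeration.

```python
from itertools import combinations

# Toy inventory of host weaknesses: a CVSS-like risk score and a
# remediation cost. All names and numbers are invented for illustration.
vulns = {
    "web:sqli":    {"risk": 9.0, "cost": 3},
    "web:xss":     {"risk": 6.0, "cost": 1},
    "db:weak_pwd": {"risk": 8.0, "cost": 2},
    "app:old_tls": {"risk": 5.0, "cost": 2},
}

def best_mitigation(vulns, budget):
    """Pick the set of fixes that minimizes residual risk within the
    budget by exhaustive enumeration. (The paper instead encodes the
    decision as a Satisfiability Modulo Theories problem, which scales
    to large networks; brute force only works for tiny instances.)"""
    names = list(vulns)
    total = sum(v["risk"] for v in vulns.values())
    best_res, best_set = total, frozenset()
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            cost = sum(vulns[n]["cost"] for n in subset)
            if cost > budget:
                continue
            residual = total - sum(vulns[n]["risk"] for n in subset)
            if residual < best_res:
                best_res, best_set = residual, frozenset(subset)
    return best_res, best_set

residual, fixes = best_mitigation(vulns, budget=5)
```

Note that the cheapest high-risk fix is not always chosen: here three cheaper fixes together remove more risk than the single most severe vulnerability.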
123.
Many optimization problems arising in practical applications have functional constraints, and some of these constraints are active, meaning that they prevent any solution from improving the objective function value beyond the limits they impose. Therefore, the optimal solution usually lies on the boundary of the feasible region. In order to converge faster when solving such problems, a new ranking and selection scheme is introduced which exploits this feature ...
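The abstract's specific scheme is not given here, but the kind of constrained ranking it builds on can be sketched with the widely used feasibility rules: feasible beats infeasible, lower objective wins among feasible solutions, and smaller violation wins among infeasible ones. The example problem below is illustrative, not from the paper.

```python
def constraint_violation(x, constraints):
    """Total violation of g_i(x) <= 0 constraints (0 when feasible)."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(a, b, f, constraints):
    """Feasibility-rule comparison for constrained selection (a sketch
    of the style of ranking the abstract refers to, not its exact
    scheme): returns True if candidate a outranks candidate b."""
    va = constraint_violation(a, constraints)
    vb = constraint_violation(b, constraints)
    if va == 0 and vb == 0:
        return f(a) < f(b)          # both feasible: compare objectives
    if va == 0 or vb == 0:
        return va == 0              # feasible always beats infeasible
    return va < vb                  # both infeasible: less violation wins

# Minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e., x >= 1).
# The unconstrained optimum x = 0 is infeasible, so the constrained
# optimum sits on the boundary x = 1, as the abstract describes.
f = lambda x: x * x
cons = [lambda x: 1.0 - x]

assert better(1.0, 0.5, f, cons)    # feasible beats infeasible
assert better(1.0, 2.0, f, cons)    # boundary beats interior-feasible
```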
124.
Torsional micromirrors have recently emerged as an effective means of light manipulation. Their fast response, low wavelength sensitivity, and easy mass production have made them an attractive technology for optical switching and scanning applications. In this work, we developed a rigorous model of an electrically actuated torsional micromirror. We verified the model against experimental data and conducted a convergence analysis to determine the minimum size of a reduced-order model (ROM) capable of representing the microscanner response accurately. We used the optimal ROM to study the dynamics of a microscanner. We found that the microscanner response exhibits a softening-type nonlinearity whose magnitude increases as the magnitude of the bias voltage increases. This nonlinearity results in multiple stable solutions at excitation frequencies close to, but less than, the natural frequency of the first mode. Operating the mirror in this region can cause abrupt jumps in the mirror response, thereby degrading the scanner performance. Furthermore, for a certain voltage range, we observed a two-to-one internal resonance between the first two modes. Due to this internal resonance, the mirror exhibits complex dynamic behavior, which degrades the microscanner’s performance. We formulated a simple design rule to avoid this problem.
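The softening behavior described above can be sketched with the textbook backbone curve of a Duffing oscillator: if the first mode is reduced to u'' + ω0²u + γu³ = 0 with γ < 0, the resonance peak bends below ω0 as amplitude grows, which is what creates the multivalued region and the jumps. The parameter values below are illustrative, not fitted to the device.

```python
def backbone_frequency(a, omega0, gamma):
    """First-order (method of multiple scales) approximation of the
    amplitude-dependent resonance frequency of a Duffing oscillator
        u'' + omega0**2 * u + gamma * u**3 = 0:
        omega(a) ~= omega0 + 3*gamma*a**2 / (8*omega0).
    gamma < 0 (softening) bends the peak below omega0, producing the
    multivalued response region the abstract describes."""
    return omega0 + 3.0 * gamma * a ** 2 / (8.0 * omega0)

omega0, gamma = 1.0, -0.3          # illustrative values, not device data
freqs = [backbone_frequency(a, omega0, gamma) for a in (0.0, 0.5, 1.0)]

# The resonance frequency decreases with amplitude: the softening signature.
assert freqs[0] == omega0
assert freqs[0] > freqs[1] > freqs[2]
```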
125.

An essential element of the smart city vision is providing safe and secure journeys via intelligent vehicles and smart roads. Vehicular ad hoc networks (VANETs) have played a significant role in enhancing road safety, as vehicles can share information on road conditions. However, VANETs share the same security concerns as legacy ad hoc networks. Unlike existing works, in this paper we consider detecting a common attack in which nodes modify safety messages or drop them. Unfortunately, detecting this type of intrusion is a challenging problem, since some packets may be lost or dropped in a normal VANET due to congestion, without any malicious action. To mitigate these concerns, this paper presents a novel scheme for minimizing the invalidity ratio of VANET packet transmissions. In order to detect unusual traffic, the proposed scheme combines evidence from current as well as past behaviour to evaluate the trustworthiness of both data and nodes. The new intrusion detection scheme is accomplished through four phases, namely, a rule-based security filter, a Dempster–Shafer adder, a node-history database, and a Bayesian learner. The suspicion level of each incoming data item is determined based on the extent of its deviation from data reported by trustworthy nodes. Dempster–Shafer theory is used to combine multiple pieces of evidence, and a Bayesian learner is adopted to classify each event in the VANET as well behaved or misbehaving. The proposed solution is validated through extensive simulations. The results confirm that the fusion of different evidence has a significant positive impact on the performance of the security scheme compared to other counterparts.
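The "Dempster–Shafer adder" phase combines independent bodies of evidence with Dempster's rule of combination. A minimal sketch over the frame {T (trustworthy), M (misbehaving)}, with "TM" carrying the uncommitted (uncertainty) mass, is shown below; the mass values are illustrative, not the paper's.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {T, M}, with the
    universal set 'TM' as the uncertainty mass. Conflicting mass
    (empty intersections) is discarded and the rest renormalized."""
    def meet(a, b):
        s = set(a) & set(b)
        return "".join(x for x in "TM" if x in s)

    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = meet(a, b)
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Illustrative evidence from the rule-based filter and the node's history:
m_filter  = {"T": 0.6, "M": 0.1, "TM": 0.3}   # packet looks consistent
m_history = {"T": 0.5, "M": 0.2, "TM": 0.3}   # node behaved well before
fused = dempster_combine(m_filter, m_history)
```

Fusing two weakly trusting sources yields a belief in trustworthiness stronger than either source alone, which is exactly why combining current and past evidence helps here.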
126.

Orthogonal frequency division multiple access (OFDMA) is extensively utilized for the downlink of cellular systems such as Long Term Evolution (LTE) and LTE-Advanced. In OFDMA cellular networks, orthogonal resource blocks can be used within each cell. However, the available resources are scarce, so they have to be reused by adjacent cells in order to achieve high spectral efficiency, which leads to inter-cell interference (ICI). Thus, ICI coordination among neighboring cells is very important for the performance improvement of cellular systems. Fractional frequency reuse (FFR) has been widely adopted as an effective solution that improves the throughput of cell-edge users. However, FFR does not account for the varying nature of the channel; moreover, it favors cell-edge users at the expense of cell-inner users. Therefore, frequency reuse approaches that address these weak points of FFR are needed. In this paper, we present an adaptive, self-organizing frequency reuse approach based on dividing every cell into two regions, namely cell-inner and cell-outer regions, and minimizing the total interference encountered by all users in every region. Unlike traditional FFR schemes, the proposed approach adjusts itself to the varying nature of the wireless channel. Furthermore, we derive the optimal value of the inner radius at which the total throughput of the inner users of the home cell is as close as possible to the total throughput of its outer users. Simulation results show that the proposed adaptive approach achieves higher total throughput, for both the home cell and all 19 cells, than strict FFR, even when all cells are fully loaded, a regime in which other algorithms in the literature have failed to outperform strict FFR. The improved throughput means that higher spectral efficiency can be achieved; i.e., the spectrum, the most precious resource in wireless communication, is utilized efficiently. In addition, the proposed algorithm can provide significant power savings, reaching 50% compared to strict FFR, without penalizing throughput performance.
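The idea of choosing the inner radius so that inner-region and outer-region throughputs balance can be sketched with a toy single-cell model: users uniform on a disk, a distance-dependent Shannon rate, and a bisection on the radius. The path-loss exponent and SNR constant below are illustrative; the paper's model includes interference from all 19 cells.

```python
import math

# Toy single-cell model: users uniform on a disk of radius R; a user at
# distance d gets Shannon rate log2(1 + snr0 / d**alpha). Parameters are
# illustrative assumptions, not the paper's simulation settings.
R, alpha, snr0 = 500.0, 3.5, 1e7

def rate(d):
    return math.log2(1.0 + snr0 / d ** alpha)

def region_throughput(r_lo, r_hi, steps=400):
    """Aggregate rate over the annulus r_lo..r_hi (midpoint rule,
    weighting each radius by its circumference 2*pi*r)."""
    h = (r_hi - r_lo) / steps
    total = 0.0
    for i in range(steps):
        r = r_lo + (i + 0.5) * h
        total += rate(r) * 2.0 * math.pi * r * h
    return total

def balanced_inner_radius(tol=0.5):
    """Bisection on the inner radius: inner throughput grows and outer
    throughput shrinks as r increases, so their difference has exactly
    one sign change in (0, R)."""
    lo, hi = 1.0, R - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if region_throughput(1.0, mid) < region_throughput(mid, R):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_star = balanced_inner_radius()
```

Monotonicity of the two region throughputs in r is what makes a simple bisection sufficient here.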
127.
In today’s operation, all usage records for billing, regardless of their source and service type, are put into a file or stream and delivered to the downstream revenue accounting office for processing. The revenue accounting office operates in batch mode: it scans through the records and separates those required for special processing by other applications, such as fraud management and customer access to network usage data. However, there is a significant delay between the time the usage records are generated and the time they become available to the other systems. This paper proposes solutions to support real-time transfer of automatic message accounting (AMA) records and files. First, the existing automatic message accounting teleprocessing system (AMATPS) architecture is analyzed to study its limitations. Next, transport mechanisms are identified and analyzed. Finally, an alternative to the existing AMATPS architecture is discussed.
128.
Accelerated life testing has been widely used in product life testing experiments because it can quickly provide information on lifetime distributions by testing products or materials at higher-than-normal levels of stress, such as pressure, temperature, vibration, voltage, or load, to induce early failures. In this paper, a step-stress partially accelerated life test (SS-PALT) is considered under progressive type-II censored data with random removals, where the removals from the test are assumed to follow a binomial distribution. The lifetimes of the tested items are assumed to follow the length-biased weighted Lomax distribution. The maximum likelihood method is used to estimate the model parameters of the length-biased weighted Lomax distribution, and asymptotic confidence interval estimates of the model parameters are evaluated using the Fisher information matrix. Since the Bayesian estimators cannot be obtained in explicit form, the Markov chain Monte Carlo method is employed, which yields both the Bayesian estimates and the credible intervals of the involved parameters. The precision of the Bayesian estimates and the maximum likelihood estimates is compared by simulations, and the performance of the considered confidence intervals is compared for different parameter values and sample sizes. The Bootstrap confidence intervals give more accurate results than the approximate confidence intervals, since the lengths of the former are less than those of the latter for different sample sizes, observed failures, and censoring schemes, in most cases. Similarly, the percentile Bootstrap confidence intervals give more accurate results than the Bootstrap-t intervals, since the lengths of the former are less than those of the latter, in most cases. Further performance comparison is conducted through experiments with real data.
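The maximum-likelihood step can be sketched for the plain (unweighted) Lomax distribution, which is the simpler textbook case underlying the length-biased weighted variant used in the paper: for a fixed scale, the shape MLE is closed-form, and the scale can then be found by a grid search on the profile log-likelihood. Everything below (sample size, grid, true parameters) is an illustrative assumption.

```python
import math, random

def lomax_sample(n, alpha, lam, seed=0):
    """Inverse-CDF sampling from Lomax(alpha, lam), whose CDF is
    F(x) = 1 - (1 + x/lam)**(-alpha) for x >= 0."""
    rng = random.Random(seed)
    return [lam * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0)
            for _ in range(n)]

def lomax_profile_mle(data, lam_grid):
    """MLE for the plain Lomax distribution (not the paper's
    length-biased weighted variant). For fixed lam the shape MLE is
        alpha_hat(lam) = n / sum(log(1 + x/lam));
    we grid-search lam on the resulting profile log-likelihood."""
    n = len(data)
    best = (-math.inf, None, None)
    for lam in lam_grid:
        s = sum(math.log(1.0 + x / lam) for x in data)
        alpha = n / s
        loglik = n * math.log(alpha) - n * math.log(lam) - (alpha + 1.0) * s
        if loglik > best[0]:
            best = (loglik, alpha, lam)
    return best[1], best[2]

data = lomax_sample(4000, alpha=2.0, lam=1.0, seed=42)
alpha_hat, lam_hat = lomax_profile_mle(data, [0.5 + 0.05 * i for i in range(31)])
```

With a large sample the recovered shape lands near the true value of 2; the paper replaces this closed-form step with numerical likelihood maximization for its weighted model.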
129.
High-quality medical microscopic images used for disease detection are expensive and difficult to store. Therefore, low-resolution images are favorable due to their low storage space and ease of sharing, and they can be enlarged when needed using Super-Resolution (SR) techniques. However, it is important to maintain the shape and size of the medical images while enlarging them. One of the problems facing SR is that the performance of medical image diagnosis becomes very poor when the reconstructed image resolution deteriorates. Consequently, this paper suggests a multi-SR and classification framework based on Generative Adversarial Networks (GANs) to generate high-resolution images with higher quality and finer details and reduced blurring. The proposed framework comprises five GAN models: Enhanced SR Generative Adversarial Network (ESRGAN), Enhanced Deep SR GAN (EDSRGAN), Sub-Pixel-GAN, SRGAN, and Efficient Wider Activation-B GAN (WDSR-b-GAN). To train the proposed models, we employed images from the well-known BreakHis dataset and enlarged them by 4× and 16× upscale factors, with ground-truth images of size 256 × 256 × 3. Moreover, several evaluation metrics, such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), Multiscale Structural Similarity Index (MS-SSIM), and histogram analysis, are applied to make comprehensive and objective comparisons and to determine the best methods in terms of efficiency, training time, and storage space. The obtained results reveal the superiority of the proposed models over traditional and benchmark models in terms of color and texture restoration and detection, achieving an accuracy of 99.7433%.
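Two of the metrics listed above, MSE and PSNR, are simple enough to sketch directly. The tiny pixel lists below are illustrative stand-ins for the 256 × 256 × 3 patches the paper evaluates.

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equally sized images, given as
    flat sequences of pixel values."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images; higher means
    the reconstruction is closer to the ground truth. Identical images
    give infinite PSNR."""
    m = mse(img_a, img_b)
    return math.inf if m == 0 else 10.0 * math.log10(peak ** 2 / m)

# Tiny illustrative "images" (not BreakHis data):
ground_truth = [52, 55, 61, 59, 79, 61, 76, 61]
good_sr      = [53, 55, 60, 59, 78, 62, 76, 60]   # off by at most 1
poor_sr      = [40, 70, 50, 75, 60, 80, 60, 75]   # visibly wrong

assert psnr(ground_truth, ground_truth) == math.inf
assert psnr(ground_truth, good_sr) > psnr(ground_truth, poor_sr)
```

Ranking SR models by PSNR alone can be misleading for texture, which is why the paper also reports SSIM, MS-SSIM, and histogram comparisons.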
130.
Breast cancer is one of the most common kinds of cancer, and its early detection may help lower overall mortality rates. In this paper, we propose a novel approach for detecting and classifying breast cancer regions in thermal images. The proposed approach starts with preprocessing the input images and segmenting the significant regions of interest. In addition, to properly train the machine learning models, data augmentation is applied to increase the number of segmented regions using various scaling ratios. To extract the relevant features from the breast cancer cases, a set of deep neural networks (VGGNet, ResNet-50, AlexNet, and GoogLeNet) is employed. The resulting set of features is processed using the binary dipper throated algorithm to select the most effective features for achieving high classification accuracy. The selected features are used to train a neural network that finally classifies the thermal images of breast cancer. To achieve accurate classification, the parameters of the employed neural network are optimized using the continuous dipper throated optimization algorithm. Experimental results show the effectiveness of the proposed approach in classifying breast cancer cases compared to other recent approaches in the literature. Moreover, several experiments were conducted to compare the performance of the proposed approach with other approaches, and their results emphasized its superiority.
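The feature-selection step can be illustrated with a much simpler stand-in than the binary dipper throated metaheuristic the paper uses: a per-feature Fisher-style separability score on a toy dataset with known informative and noise features. All data below are synthetic assumptions.

```python
import random

def fisher_score(feature, labels):
    """Per-feature class separability: squared difference of the class
    means over the sum of the class variances (higher means the feature
    discriminates the two classes better)."""
    a = [x for x, y in zip(feature, labels) if y == 0]
    b = [x for x, y in zip(feature, labels) if y == 1]
    mean = lambda v: sum(v) / len(v)
    var = lambda v, m: sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(a), mean(b)
    return (ma - mb) ** 2 / (var(a, ma) + var(b, mb) + 1e-12)

# Synthetic dataset: two informative features and two pure-noise
# features. (The paper instead selects among deep CNN features with a
# binary metaheuristic; a filter score is the simplest stand-in.)
rng = random.Random(7)
labels = [i % 2 for i in range(200)]
features = [
    [y * 2.0 + rng.gauss(0, 0.3) for y in labels],   # informative
    [y * 1.5 + rng.gauss(0, 0.3) for y in labels],   # informative
    [rng.gauss(0, 1.0) for _ in labels],             # noise
    [rng.gauss(0, 1.0) for _ in labels],             # noise
]
scores = [fisher_score(f, labels) for f in features]
selected = [i for i, s in enumerate(scores) if s > 1.0]
```

A wrapper metaheuristic like the one in the paper searches over feature subsets using classifier accuracy as fitness, which can capture feature interactions that a per-feature filter score misses.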