The development of rechargeable lithium-ion batteries (LIBs) is driven by the ever-increasing demand for high energy density and excellent rate performance. Charge-transfer kinetics and polarization theory, regarded as the basic principles governing charge regulation in LIBs, indicate that rapid transport of both electrons and ions is vital to the electrochemical reaction process. Graphene, a promising candidate for charge regulation in high-performance LIBs, has been extensively investigated owing to its excellent carrier mobility, large specific surface area, and structural tunability. Recent progress on the structural design and interfacial modification of graphene to regulate charge transport in LIBs is summarized. In addition, the structure-performance relationships between the structure of graphene and its dedicated applications in LIBs are clarified in detail. Taking graphene as a typical example to explore the mechanism of charge regulation outlines ways to further understand and improve carbon-based nanomaterials for the next generation of electrochemical energy storage devices.
In this paper we solve the problem of determining the optimal inspection/disposition policy for a finite batch of items produced by a machine that is subject to random breakdowns. In particular, we identify which units should be inspected, and in which order, so as to minimize costs. The operational implications of the optimal policy are analyzed with a selected set of numerical results. We place special emphasis on three different policies: the cost-minimizing policy; the policy of perfect information, i.e., we insist on determining the quality of each unit; and the policy of zero defects, i.e., we insist that all accepted units are known to conform to specifications, allowing the rejection of units of unknown quality. We also show how the optimal inspection/disposition policy is incorporated into the optimization of the batch size.
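To make the decision structure concrete, the following toy simulation (not the paper's model or its optimal policy) compares the expected cost of the three highlighted policies for a small batch under an assumed quality model in which the machine shifts out of control at a random unit and produces defectives thereafter; the cost parameters, breakdown probability, and example policies are all illustrative assumptions.

```python
# Toy illustration only: compares expected costs of simple inspection/disposition
# policies under an assumed breakdown model. Not the paper's formulation.
import random

def simulate(policy, n=10, p_breakdown=0.1,
             c_inspect=1.0, c_accept_defective=20.0, c_reject_good=5.0,
             trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Unit index at which the machine breaks down (may lie beyond the batch).
        shift = next((i for i in range(n) if rng.random() < p_breakdown), n)
        good = [i < shift for i in range(n)]
        cost = 0.0
        for i in range(n):
            inspect, accept_if_unknown = policy(i, n)
            if inspect:
                cost += c_inspect
                accept = good[i]              # inspection reveals the unit's quality
            else:
                accept = accept_if_unknown    # disposition without inspection
            if accept and not good[i]:
                cost += c_accept_defective
            if not accept and good[i]:
                cost += c_reject_good
        total += cost
    return total / trials

# Illustrative policies, each returning (inspect?, accept-if-not-inspected?).
perfect_information = lambda i, n: (True, None)   # inspect every unit
accept_all          = lambda i, n: (False, True)  # inspect nothing, accept everything
zero_defects        = lambda i, n: (i < 3, False) # inspect a prefix, reject the rest

for name, pol in [("perfect information", perfect_information),
                  ("accept all", accept_all),
                  ("zero defects (prefix of 3)", zero_defects)]:
    print(name, round(simulate(pol), 2))
```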
Believability has been proposed as a factor influencing the persuasiveness of narratives. A measure of narrative believability was developed and validated. Study 1 details the construction and evaluation of the Narrative Believability Scale (NBS-12) in terms of internal consistency. Study 2 evaluates the criterion-related and construct validity of the scale. Study 3 tests the predictive validity of the measure for identifying juror verdicts and verdict confidence over and above the influence of other measures, including presentation order, attorney credibility, bias, and transportation. The NBS-12 was found to be a psychometrically robust measure of narrative believability and was able to predict variance in verdicts and verdict confidence. These results have implications for narrative persuasion research and understanding juror decision making.
There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. For it to be effective, speculative execution requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows an individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.
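As an illustration of the idea only (the abstract does not give the paper's actual classification scheme or component predictors, so the classes, table sizes, and predictor choices below are assumptions), a hybrid predictor might route heavily biased branches to a simple bimodal predictor and mixed-behaviour branches to a gshare-style two-level predictor:

```python
# Sketch of branch classification in a hybrid predictor: each static branch is
# served by the component assumed to suit it best. Sizes and the split into
# "biased" vs. "mixed" branches are illustrative assumptions, not the paper's scheme.

class BimodalPredictor:
    """2-bit saturating counters indexed by branch address."""
    def __init__(self, size=1024):
        self.counters = [2] * size  # start weakly taken
    def predict(self, pc):
        return self.counters[pc % len(self.counters)] >= 2
    def update(self, pc, taken):
        i = pc % len(self.counters)
        self.counters[i] = min(3, self.counters[i] + 1) if taken else max(0, self.counters[i] - 1)

class GsharePredictor:
    """Two-level predictor: global history XORed with the branch address."""
    def __init__(self, size=4096, hist_bits=12):
        self.counters = [2] * size
        self.history = 0
        self.mask = (1 << hist_bits) - 1
    def _index(self, pc):
        return (pc ^ self.history) % len(self.counters)
    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2
    def update(self, pc, taken):
        i = self._index(pc)
        self.counters[i] = min(3, self.counters[i] + 1) if taken else max(0, self.counters[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask

class ClassifyingHybridPredictor:
    """Routes each branch to the component chosen by an offline classification,
    e.g. branches observed (via profiling) to be taken >90% or <10% of the time
    are treated as 'biased' and sent to the cheaper bimodal component."""
    def __init__(self, biased_branches):
        self.biased = biased_branches  # set of branch addresses classified as biased
        self.bimodal = BimodalPredictor()
        self.gshare = GsharePredictor()
    def _component(self, pc):
        return self.bimodal if pc in self.biased else self.gshare
    def predict(self, pc):
        return self._component(pc).predict(pc)
    def update(self, pc, taken):
        self._component(pc).update(pc, taken)
```

The point of the sketch is only that each static branch is handled by one fixed component, so the harder-to-predict branches do not pollute the state used for the easy ones.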
Neural networks are a good way to interrelate nonlinear variables in a robust manner. The simplex method for optimization is not nearly as effective, and neither are the various statistical methods for classifying and associating data and predicting results. The reason is that neural networks are put through a training phase, during which they can automatically fine-tune themselves as often as proves necessary to achieve the desired performance. Of course, the old adage "garbage in... garbage out" applies as much to neural networks as it does to all other data-processing applications. If the training data set (the collection of input data and its associated correct output data) is not thoughtfully chosen, the resulting network is unlikely to hold up well in an industrial environment. So it is hardly surprising that massaging the set of training data consumes some 80 percent of the engineering time spent getting a real-world neural network up and running, that is, getting it to converge under a broad enough range of conditions to be deployed with confidence in a production situation. If that data preparation is done systematically, much time can be saved and a more robust end product can be obtained. A nine-step process is given that the author's experience has shown can enhance the probability of obtaining a learning convergence robust enough for industrial use.
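The nine steps themselves are not enumerated in the abstract, so the sketch below only illustrates the general kind of systematic preparation described: pairing inputs with known-correct outputs, discarding obviously bad records, scaling features, and holding out data to check that the trained network generalizes. The function name, thresholds, and shapes are assumptions.

```python
# Illustrative data-preparation sketch (not the author's nine-step process).
import numpy as np

def prepare_training_data(inputs, targets, holdout_fraction=0.2, seed=0):
    x = np.asarray(inputs, dtype=float)
    y = np.asarray(targets, dtype=float)

    # Drop records with missing or non-finite values ("garbage in").
    ok = np.isfinite(x).all(axis=1) & np.isfinite(y)
    x, y = x[ok], y[ok]

    # Scale each input feature to zero mean, unit variance so the network
    # does not have to learn wildly different input ranges.
    mean, std = x.mean(axis=0), x.std(axis=0) + 1e-12
    x = (x - mean) / std

    # Shuffle and hold out a validation set to judge whether training
    # converges over a broad enough range of conditions to be deployed.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    split = int(len(x) * (1 - holdout_fraction))
    train, valid = idx[:split], idx[split:]
    return (x[train], y[train]), (x[valid], y[valid]), (mean, std)
```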