Similar Documents
20 similar documents retrieved (search time: 10 ms).
1.
Different pig size distributions in fattening units can prevent a pig procurement plan from achieving optimal results. Plans that fail to consider the pig size distribution and pig growth are unlikely to satisfy the demand for each pig size cost-effectively. This paper demonstrates the use of a heuristic algorithm that incorporates pig size distribution and pig growth to create a procurement plan. The performance of the developed procurement method is compared with the traditional practices of the company studied here. The results indicate that the company is likely to save approximately 9.52% of its procurement costs by adopting the proposed method. The same problems were also investigated at an industrially relevant scale, and the computational time of the proposed heuristic was found to be reasonable. Thus, the pig industry is likely to benefit from the method developed here.
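The abstract does not detail the heuristic itself; as a purely illustrative sketch (all unit names, size classes, costs, and the greedy selection rule are assumptions, not the paper's method), a size-aware procurement choice could look like this in Python:

    # Hypothetical greedy sketch of a size-aware procurement choice (not the paper's heuristic).
    # Each fattening unit offers pigs in several size classes at a unit cost; we buy the
    # cheapest available pigs per size class until demand is met.
    def greedy_procurement(offers, demand):
        """offers: list of (unit, size_class, available, unit_cost); demand: {size_class: count}."""
        plan, total_cost = [], 0.0
        remaining = dict(demand)
        for unit, size, avail, cost in sorted(offers, key=lambda o: o[3]):  # cheapest first
            need = remaining.get(size, 0)
            if need <= 0:
                continue
            take = min(avail, need)
            plan.append((unit, size, take))
            remaining[size] -= take
            total_cost += take * cost
        return plan, total_cost, remaining

    offers = [("farm_A", "90kg", 120, 150.0), ("farm_B", "90kg", 80, 145.0), ("farm_B", "100kg", 60, 160.0)]
    plan, cost, unmet = greedy_procurement(offers, {"90kg": 150, "100kg": 40})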

2.
胡开宝  张毅坤  赵明 《计算机应用》2013,33(4):1136-1138
To address the disordered routing that conventional hierarchical layout algorithms produce for large-scale programs, a channel-optimized routing algorithm that adjusts dynamically with program size is proposed, drawing on the Sugiyama hierarchical layout algorithm. By establishing a functional relationship between the number of node channels and the program size, the algorithm resolves the edge-overlap and low-efficiency problems of existing layout algorithms; a generalized tension-balance concept is incorporated into the layout to reduce crossings and keep the drawing visually clean; and, based on the relative positions of calling nodes, corresponding line-allocation and channel-request strategies are given to achieve orderly routing. Practice shows that the algorithm improves layout efficiency, effectively reduces crossings, routes nodes in an orderly manner, and is simple to implement.

3.
A method to control load distribution in closed exponential queuing networks with one class of customers is proposed. It is based on the simultaneous use of routing control and control of the servicing intensities. Consideration is given to an evolution model of the queuing network with control of load distribution and to an approximate method for analyzing queuing networks of this type. A technique to calculate the stationary distribution and formulas to calculate other stationary characteristics of queuing networks with load distribution control are described. Examples of the analysis of queuing networks of this type are presented.
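The controlled model itself is not reproduced in the abstract; as uncontrolled background only, the following sketch computes the product-form stationary distribution of a closed exponential (Gordon-Newell) network with single-server stations via Buzen's convolution algorithm, with made-up visit ratios and service rates:

    # Buzen's convolution algorithm for a closed Gordon-Newell network with single-server
    # stations and N customers (baseline without the paper's load-distribution control).
    def buzen_normalization(x, N):
        """x[i] = visit_ratio[i] / service_rate[i]; returns G(n) for n = 0..N."""
        G = [1.0] + [0.0] * N          # G(0) = 1
        for xi in x:                   # convolve stations one at a time
            for n in range(1, N + 1):
                G[n] += xi * G[n - 1]
        return G

    # Example (made-up rates): 3 stations, 5 customers.
    visit = [1.0, 0.7, 0.3]
    mu = [2.0, 1.0, 1.5]
    x = [v / m for v, m in zip(visit, mu)]
    G = buzen_normalization(x, 5)
    # Stationary probability of a state (n1, n2, n3) with n1 + n2 + n3 = 5:
    state = (2, 2, 1)
    p = 1.0
    for xi, ni in zip(x, state):
        p *= xi ** ni
    p /= G[5]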

4.
Mining high utility itemsets is one of the most important research issues in data mining owing to its ability to consider nonbinary frequency values of items in transactions and different profit values for each item. Mining such itemsets from a transaction database involves finding those itemsets with utility above a user-specified threshold. In this paper, we propose an efficient concurrent algorithm, called CHUI-Mine (Concurrent High Utility Itemsets Mine), for mining high utility itemsets by dynamically pruning the tree structure. A tree structure, called the CHUI-Tree, is introduced to capture the important utility information of the candidate itemsets. By recording changes in support counts of candidate high utility items during the tree construction process, we implement dynamic CHUI-Tree pruning, and discuss the rationality thereof. The CHUI-Mine algorithm makes use of a concurrent strategy, enabling the simultaneous construction of a CHUI-Tree and the discovery of high utility itemsets. Our algorithm reduces the problem of huge memory usage for tree construction and traversal in tree-based algorithms for mining high utility itemsets. Extensive experimental results show that the CHUI-Mine algorithm is both efficient and scalable.
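The CHUI-Tree construction is not given in the abstract; the sketch below only illustrates the underlying definition that any high-utility-itemset miner works from (the utility of an itemset summed over the transactions containing it, compared with a minimum-utility threshold), using invented profits and transactions and a brute-force enumeration rather than the CHUI-Mine algorithm:

    # Definition-level sketch: utility of an itemset = sum over transactions containing it of
    # (quantity * unit profit) for its items; "high utility" means utility >= min_util.
    from itertools import combinations

    profit = {"a": 5, "b": 2, "c": 1}                       # external utility per item (example)
    transactions = [                                        # item -> purchased quantity
        {"a": 2, "b": 3},
        {"a": 1, "c": 4},
        {"b": 2, "c": 1},
    ]

    def utility(itemset, tx):
        if not all(i in tx for i in itemset):
            return 0
        return sum(tx[i] * profit[i] for i in itemset)

    def high_utility_itemsets(min_util):
        items = sorted(profit)
        result = {}
        for k in range(1, len(items) + 1):
            for iset in combinations(items, k):
                u = sum(utility(iset, tx) for tx in transactions)
                if u >= min_util:
                    result[iset] = u
        return result

    # high_utility_itemsets(12) -> {('a',): 15, ('a', 'b'): 16}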

5.
Classifying traffic into specific network applications is essential for application-aware network management, and it becomes more challenging as modern applications complicate their network behaviors. While port number-based classifiers work only for some well-known applications and signature-based classifiers are not applicable to encrypted packet payloads, researchers tend to classify network traffic based on behaviors observed in network applications. In this paper, a session level flow classification (SLFC) approach is proposed to classify network flows as a session, which comprises the flows in the same conversation. SLFC first classifies flows into the corresponding applications by packet size distribution (PSD) and then groups flows into sessions by port locality. With PSD, each flow is transformed into a set of points in a two-dimensional space, and the distances between each flow and the representatives of pre-selected applications are computed. The flow is recognized as the application with the minimum distance. Meanwhile, port locality is used to group flows into sessions because an application often uses consecutive port numbers within a session. If the flows of a session are classified into different applications, an arbitration algorithm is invoked to make the correction. The evaluation shows that SLFC achieves high accuracy rates on both flow and session classification: 99.9% and 99.98%, respectively. When SLFC is applied to online classification, it is able to make decisions quickly by checking at most 300 packets for long-lasting flows. Based on our test data, an average of 72% of packets in long-lasting flows can be skipped without reducing the classification accuracy.
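As a rough illustration of the minimum-distance step only (the paper's actual PSD representation, representative selection, and port-locality arbitration are not reproduced), a flow's packet-size feature vector can be assigned to the nearest application representative; the bin fractions below are fabricated:

    # Nearest-representative classification on packet-size-distribution (PSD) feature vectors.
    import math

    representatives = {
        "http": [0.70, 0.20, 0.10],   # fraction of packets in three size bins (made-up values)
        "smtp": [0.30, 0.50, 0.20],
        "p2p":  [0.10, 0.30, 0.60],
    }

    def classify_flow(psd):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(representatives, key=lambda app: dist(psd, representatives[app]))

    print(classify_flow([0.65, 0.25, 0.10]))   # -> "http"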

6.
Appropriate comments on code snippets provide insight into code functionality and are helpful for program comprehension. However, because authoring comments is costly, many code projects do not contain adequate comments. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to reduce the human effort of annotating code. Most existing approaches attempt to exploit certain correlations (usually manually specified) between code and generated comments, which are easily violated when coding patterns change, and the performance of comment generation then declines. In addition, recent approaches ignore code constructs and treat code snippets as plain text. Furthermore, previous datasets are too small to validate the methods and show their advantage. In this paper, we propose a new attention mechanism called CodeAttention to translate code into comments; it is able to utilize code constructs such as critical statements, symbols, and keywords. By focusing on these specific points, CodeAttention can understand the semantic meaning of code better than previous methods. To verify our approach on wider coding patterns, we build a large dataset from open projects on GitHub. Experimental results on this large dataset demonstrate that the proposed method outperforms existing approaches in both objective and subjective evaluation. We also perform ablation studies to determine the effects of the different parts of CodeAttention.
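The CodeAttention mechanism itself is not specified in the abstract; purely to illustrate the general idea of biasing attention weights toward code constructs such as keywords and symbols, here is a generic additive-bias attention sketch in numpy (embeddings and the bias value are arbitrary stand-ins, not the paper's model):

    # Generic additive-bias attention sketch (illustrative only, not the paper's CodeAttention).
    import numpy as np

    tokens = ["if", "x", ">", "0", ":", "return", "x"]
    is_keyword = np.array([t in {"if", "return", ">"} for t in tokens], dtype=float)

    rng = np.random.default_rng(0)
    E = rng.normal(size=(len(tokens), 8))      # token embeddings (random stand-ins)
    q = rng.normal(size=8)                     # decoder query vector (random stand-in)

    scores = E @ q + 1.5 * is_keyword          # boost keyword/symbol positions
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    context = weights @ E                      # context vector fed to the comment decoder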

7.
Let G=(V,E) be an undirected graph and C a subset of vertices. If the sets B_r(v) ∩ C, v ∈ V (respectively, v ∈ V \ C), are all nonempty and different, where B_r(v) denotes the set of all vertices within distance r from v, we call C an r-identifying code (respectively, an r-locating-dominating code). We prove that, given a graph G and an integer k, the decision problem of the existence of an r-identifying code, or of an r-locating-dominating code, of size at most k in G, is NP-complete for any r.
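For concreteness (this checker is not taken from the paper), the definition can be verified by brute force on a small graph: every set B_r(v) ∩ C must be nonempty and all such sets must be pairwise distinct:

    # Brute-force check of the r-identifying-code definition on a small graph.
    from collections import deque

    def ball(adj, v, r):
        """Vertices within graph distance r of v (BFS)."""
        seen, frontier = {v}, deque([(v, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == r:
                continue
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
        return seen

    def is_r_identifying(adj, C, r):
        signatures = []
        for v in adj:
            s = frozenset(ball(adj, v, r) & C)
            if not s:
                return False                                 # some B_r(v) ∩ C is empty
            signatures.append(s)
        return len(signatures) == len(set(signatures))       # all sets pairwise distinct

    # Path graph 0-1-2-3:
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(is_r_identifying(adj, {0, 2, 3}, 1))      # False: vertices 2 and 3 get the same trace
    print(is_r_identifying(adj, {0, 1, 2, 3}, 1))   # True: all closed neighbourhoods differ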

8.
Microdisc arrays are considered for which the constituent disc electrodes are nonuniform in size. A method is described for using linear sweep voltammetry to calibrate the array to find the mean and standard deviation of the microdisc radii. Numerical simulation is used to model linear sweep voltammetry at arrays of single-sized microdiscs and of nonuniformly sized microdiscs.

9.
The hyperbolic distribution is fitted to published grain-size data collected from the surface of two gravel-bed rivers (the Fraser and Mamquam River data sets of Rice and Church). Our parametric approach enables the calculation of standard errors for percentile estimates, as an alternative to using the bootstrap for this purpose. For estimation we have used the statistical package R, and the advantages of this software for this type of analysis are highlighted in this paper.

10.
We have studied the optimality of the canonical genetic code by means of simulated evolution. A genetic algorithm is used to search for better-adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes. The analysis is performed within the coevolution theory of genetic code organization. We have studied the progression of the canonical genetic code's optimality within this theory, considering a possible scenario of a previous code with two-letter codons as well as the current organization of the canonical code. Moreover, we have analysed the particular optimality and the progression of adaptability of the individual nucleotide bases.
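The coevolution-based fitness measure used in the study is not given in the abstract; the skeleton below is only a schematic genetic algorithm over hypothetical codes, with a placeholder fitness (the amino-acid grouping and all parameters are invented and would be replaced by the study's actual adaptability measure):

    # Schematic GA skeleton over hypothetical code assignments (placeholder fitness).
    import random

    AMINO_ACIDS = list("ARNDCQEGHILKMFPSTWYV")      # the 20 standard amino acids

    def random_code():
        code = AMINO_ACIDS[:]
        random.shuffle(code)                        # which amino acid sits in which codon block
        return code

    def fitness(code):
        # Placeholder: reward adjacent blocks carrying chemically "similar" amino acids.
        hydrophobic = set("AVLIMFWC")
        return sum((code[i] in hydrophobic) == (code[i + 1] in hydrophobic)
                   for i in range(len(code) - 1))

    def evolve(pop_size=50, generations=200):
        pop = [random_code() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]        # truncation selection
            children = []
            for parent in survivors:
                child = parent[:]
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]    # swap mutation keeps a valid code
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()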

11.
This work addresses the computation of surfactant feed profiles for shaping a particle size distribution to a bimodal target distribution. A fundamental population balance model of styrene semibatch emulsion polymerization is used in this control study. Both surfactant feed rate and free surfactant concentration are considered as alternative control variables and a comparison is made of the two approaches. A comparison is also made of several different objective function norms in the optimization. Results suggest that a min–max norm, tied to the distribution modes, is the most appropriate metric.

12.
The inverse problem of reconstructing the erythrocyte size distribution from laser diffractometry data is analyzed for two erythrocyte geometric models: the flat disk and the biconcave disk. For both models, the Tikhonov regularization method, which takes into account a priori information about the smoothness, finiteness, and nonnegativity of the solution, correctly reconstructs the unknown size distributions for normal blood, microcytosis, and macrocytosis, the latter two characterized by fractions of abnormally small and abnormally large cells, respectively. When the inverse problem is solved under the assumption of a flat particle shape while the diffraction pattern is calculated with the biconcave disk model, the errors in the first three statistical moments are directly proportional to the depth of the concavity of the biconcave disk that models the erythrocytes. In this case the solution qualitatively coincides with the true distribution but is shifted relative to it along the horizontal axis, which in principle can be compensated for using a priori information about the mean of the erythrocyte size distribution.
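The diffraction forward model is not reproduced in the abstract; the sketch below shows only the generic nonnegative Tikhonov step referred to there, solving min ||A f - b||^2 + lam ||f||^2 subject to f >= 0 by stacking the system and calling a nonnegative least-squares solver (A, b, and lam are synthetic; a smoothness prior would replace the identity block with a derivative operator):

    # Nonnegative Tikhonov-regularized inversion, solved as ordinary NNLS on the
    # stacked system [A; sqrt(lam) I] f ~ [b; 0].
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n = 40                                           # number of size-distribution bins
    A = np.abs(rng.normal(size=(60, n)))             # synthetic stand-in for the diffraction kernel
    f_true = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2)   # smooth "true" distribution
    b = A @ f_true + 0.01 * rng.normal(size=60)      # noisy synthetic measurement

    lam = 1e-2
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    f_rec, _ = nnls(A_aug, b_aug)                    # nonnegative, regularized reconstruction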

13.
After crossing the midline, different populations of commissural axons in Drosophila target specific longitudinal pathways at different distances from the midline. It has recently been shown that this choice of lateral position is governed by the particular combination of Robo receptors expressed by these axons, presumably in response to a gradient of Slit released by the midline. Here we propose a simple theoretical model of this combinatorial coding scheme. The principal results of the model are that purely quantitative rather than qualitative differences between the different Robo receptors are sufficient to account for the effects observed following removal or ectopic expression of specific Robo receptors, and that the steepness of the Slit gradient in vivo must exceed a certain minimum for the results observed experimentally to be consistent.

14.
This paper comprehensively discusses the various methods for handling sound files in Visual FoxPro, with emphasis on techniques for controlling sound files through the MCI programming interface, and, building on the theoretical discussion, gives a practical example of sound file processing.

15.
李明春  静宇 《传感技术学报》2012,25(9):1189-1193
A pore size distribution model is constructed using a probability density function capable of taking a bimodal form, and is validated against experimental measurements. A gas diffusion-reaction mathematical model that incorporates the influence of the pore structure is established, and the effects of the pore size distribution shape on the response time, sensitivity, and selectivity of porous gas sensors are studied numerically. The calculated results show that a bimodal pore structure has practical significance for improving the gas-sensing performance of the sensor. As the pore structure changes, the diffusion resistance within the pores and the specific surface area vary in opposite directions, jointly affecting the surface reaction rate of the target gas inside the sensor. The 80% response time of the porous gas sensor to the target gas shortens gradually as the probability (weight) of the secondary peak increases. The relative sensitivity of the porous gas sensor, in contrast, varies non-monotonically with the secondary-peak probability and exhibits a maximum.
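The specific bimodal density used in the paper is not given here; as a generic stand-in, a two-component lognormal mixture with a secondary-peak weight w reproduces the qualitative behaviour described (all parameter values are illustrative assumptions):

    # Generic bimodal pore-size density: a two-component lognormal mixture.
    # w is the weight ("probability") of the secondary peak; parameters are illustrative only.
    import numpy as np
    from scipy.stats import lognorm

    def bimodal_psd(r, w=0.3, r1=10e-9, s1=0.4, r2=100e-9, s2=0.3):
        """Pore radius density f(r) = (1 - w) * LN(r; r1, s1) + w * LN(r; r2, s2)."""
        main = lognorm.pdf(r, s1, scale=r1)
        secondary = lognorm.pdf(r, s2, scale=r2)
        return (1.0 - w) * main + w * secondary

    r = np.logspace(-9, -6, 400)          # pore radii from 1 nm to 1 um
    f = bimodal_psd(r)                    # density used as input to a diffusion-reaction model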

16.
Atmospheric ion mobility spectra were measured at Maitri, Antarctica, using an indigenously fabricated ion mobility spectrometer in January–February 2005 during the 24th Indian Antarctic Expedition. The ion mobility spectrometer was fabricated and tested at the Indian Institute of Tropical Meteorology, Pune, India. As the ion mobility depends on the diameter of the particles, the aerosol size distribution was derived from the observed ion mobility spectra using the KL model. The model was tested by comparing the derived spectra with the observed spectra using the Scanning Mobility Particle Sizer and the Aerodynamic Particle Sizer. We show that the KL model can reasonably reproduce the observed size distribution except in the accumulation mode. Relevant meteorological parameters are also reported, which aid in the interpretation of the results.

17.
The specificity of the sets of elements generated by the genetic code is considered. The sets are calculated for unusual ways of recording genetic information on overlapping genes, where one and the same DNA segment encodes two protein sequences. The concept of elementary overlapping, i.e., overlapping at the level of individual amino acids, is introduced. A mathematical ambiguity among the components of the set of elementary overlappings is established. One of the ambiguity functions is investigated in a model proposed earlier by the author, which states that overlappings of pairs of genes belonging to different DNA strands are mathematical analogs of the stems of the messenger RNA's secondary structure. It is shown that, owing to the ambiguities, it is possible to regulate the free energy of the stem, which is a functionally significant biochemical characteristic.

18.
Gray code algorithms for the set partitions of {1,2,…,n} have been covered in several works. The first Gray code for that set was introduced by Knuth (1975) [5]; later, Ruskey presented a modified version of Knuth's algorithm with distance two; Ehrlich (1973) [3] introduced a loop-free algorithm for the set of partitions of {1,2,…,n}; Ruskey and Savage (1994) [9] generalized Ehrlich's results and gave two Gray codes for the set of partitions of {1,2,…,n}; and recently, Mansour et al. (2008) [7] gave another Gray code and loop-free generating algorithm for that set by adopting plane tree techniques. In this paper, we introduce the set of e-restricted growth functions (a generalization of restricted growth functions) and extend the aforementioned results by giving a Gray code with distance one for this set; as a particular case we obtain a new Gray code for set partitions in restricted growth function representation. Our Gray code satisfies some prefix properties and can be implemented by a loop-free generating algorithm using classical techniques; such algorithms can be used as a practical solution to some difficult problems. Finally, we give some enumerative results concerning the restricted growth functions of order d.
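To make the objects concrete: a restricted growth function of length n is a word a_1…a_n with a_1 = 0 and a_{i+1} ≤ 1 + max(a_1,…,a_i), and each such word encodes a set partition of {1,…,n} (a_i is the index of the block containing i). The sketch below is a plain recursive enumeration, not the distance-one Gray code constructed in the paper:

    # Enumerate restricted growth functions (RGFs) of length n in lexicographic order.
    def rgfs(n):
        def extend(prefix, m):               # m = current maximum value in prefix
            if len(prefix) == n:
                yield tuple(prefix)
                return
            for v in range(m + 2):           # allowed next letters: 0 .. m+1
                yield from extend(prefix + [v], max(m, v))
        return list(extend([0], 0))

    # rgfs(3) -> [(0,0,0), (0,0,1), (0,1,0), (0,1,1), (0,1,2)]  (the 5 partitions of {1,2,3})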

19.
We explore the growth of vertically aligned carbon nanofibers by plasma enhanced chemical vapor deposition, using lithographically defined Ni catalyst seeds on TiN. TiN is selected for being an electrically conducting diffusion barrier suitable for the realization of electronic devices. We show that the rate of Ni diffusion correlates with both the oxygen content of the TiN film and the film resistivity. The synthesis of the nanofibers was characterized using electron microscopy with an emphasis on three growth parameters: substrate temperature, plasma power, and chamber pressure. We propose that a catalyst surface free from carbon deposits throughout the process will induce diffusion-limited growth. The growth will shift towards a supply-limited process when the balance between acetylene, as the effective carbon bearing gas, and atomic hydrogen, as the main etching agent, is skewed in favor of acetylene. This determines whether the dominating growth mode will be vertically aligned ‘tip-type’ or disordered ‘base-type’, by affecting the competition between the formation of the first graphitic sheets on the catalyst surface and at the catalyst-substrate interface.

20.
This paper describes the development of a query compiler for the PostgreSQL DBMS based on automatic code specialization methods; these methods allow one to avoid the development and support difficulties typical for classical query compilers by dividing the compiler development problem into two independent subproblems: reduction of overhead costs and implementation of algorithmic improvements. We assert that this decomposition facilitates the solution of both subproblems: the cost reduction can be automated, while the algorithmic improvements can be implemented in the interpreter in the DBMS implementation language. This paper presents methods for online and offline specialization, considers specifics of specialization and binding-time analysis of the PostgreSQL source code, and describes the transition to a push model of execution.
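As a toy analogy only (unrelated to the actual PostgreSQL executor source), specialization removes interpretive overhead by fixing at "compile" time the parts of the input that are already known; below, a generic predicate interpreter is specialized for a fixed column, operator, and constant into a direct closure:

    # Toy illustration of specialization: interpreting a predicate vs. specializing it
    # once the column/operator/constant are known before execution.
    OPS = {"=": lambda a, b: a == b, "<": lambda a, b: a < b, ">": lambda a, b: a > b}

    def interpret(pred, row):
        col, op, const = pred                       # re-dispatched for every row
        return OPS[op](row[col], const)

    def specialize(pred):
        col, op, const = pred                       # dispatch happens once, here
        fn = OPS[op]
        return lambda row: fn(row[col], const)      # residual code: a direct comparison

    rows = [{"age": 25}, {"age": 40}, {"age": 31}]
    pred = ("age", ">", 30)
    assert [interpret(pred, r) for r in rows] == [specialize(pred)(r) for r in rows]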
