Big-data research built on deep-learning methods has revitalized decision-making in the business and enterprise domains. Firms' operational parameters depend on the big-data analytics phase, on how data are managed, and on the outcomes of big-data implementations that employ deep-learning algorithms. Deep-learning enhancements to big-data applications support decision-making by improving information processing for employees, augmenting analytical capability, and enabling the transition to more innovative work. In this approach, robust predictive patterns are extracted from unstructured information to inform decision-making methods. This paper therefore reviews the impact of deep learning applied to big data in the enterprise and business sectors. The study also provides a comprehensive survey of deep-learning techniques, illustrating their efficiency in big-data processing and their effects on operational parameters. It further examines data-dimensionality factors, the big-data complications that deep-learning algorithms can rectify, and the use of machine-learning and deep-learning processes for decision-making in enterprise and business settings. The review discusses how predictions from big-data analytics shape decision parameters within organisations, and how deep-learning implementations help manage large-scale datasets in big-data analytics processing. Finally, a comparative analysis of the reviewed studies is presented, contrasting existing deep-learning methodologies for big-data analytics.
Scientometrics - Bibliometric analyses of systematic reviews offer unique opportunities to explore the character of specific scientific fields. In this time series-based analysis, dynamics of...
In empirical studies, selection of the order of a model is routinely required. A common example is the order selection of an autoregressive model via Akaike's AIC, Schwarz's BIC or Hannan and Quinn's HIC. These criteria are based on the conditional sum of squares (CSS). However, computing the CSS can be difficult for some models, such as Bloomfield's exponential model, and/or when we allow for long-memory dependence. The main aim of the article is thus to propose an alternative way to compute the criterion, using the decomposition of the variance of the innovation errors in terms of its frequency components. We show its validity for obtaining the correct order of the model. In addition, as a by-product, we describe a simple (two-step) estimator of the parameters of the model.
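The order-selection idea in the abstract above can be sketched for a plain autoregressive model. The snippet below is a minimal illustration only (it uses ordinary least squares and the textbook AIC, not the article's frequency-domain decomposition), and the simulated AR(2) coefficients are arbitrary choices for the demonstration.

```python
import numpy as np

def fit_ar_css(x, p):
    """Fit an AR(p) model by least squares; return coefficients and the CSS."""
    n = len(x)
    # Design matrix of lagged values: row i holds x[t-1], ..., x[t-p] for t = p + i
    X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    css = float(np.sum((y - X @ coef) ** 2))  # conditional sum of squares
    return coef, css

def aic_order(x, max_p=5):
    """Select the AR order by minimising AIC = n * log(CSS / n) + 2 * (p + 1)."""
    aics = {}
    for p in range(1, max_p + 1):
        _, css = fit_ar_css(x, p)
        n_eff = len(x) - p
        aics[p] = n_eff * np.log(css / n_eff) + 2 * (p + 1)
    return min(aics, key=aics.get), aics

# Simulate an AR(2) process and inspect which order the criterion prefers
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()
best_p, aics = aic_order(x)
```

Swapping the AIC penalty `2 * (p + 1)` for `log(n_eff) * (p + 1)` gives BIC in the same framework.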
A charge-carrier density dependent mobility has been predicted for amorphous, glassy, energetically disordered semiconducting polymers, which would have considerable impact on their performance in devices. However, previous observations of a density-dependent mobility are complicated by the polycrystalline materials studied. Here, charge transport in field-effect transistors and diodes of two amorphous, glassy fluorene-triarylamine copolymers is investigated, and the results are explored in terms of a charge-carrier density dependent mobility model. The nondispersive nature of the time-of-flight (TOF) transients and analysis of dark-injection transient results and transistor transfer characteristics indicate a charge-carrier density independent mobility in both the low-density diode and the high-density transistor regimes. The mobility values for optimized transistors are in good agreement with the TOF values at the same field, and both have the same temperature dependence. The measured transistor mobility falls two to three orders of magnitude below that predicted by the charge-carrier density dependent model and does not follow the expected power-law relationship. The experimental results for these two amorphous polymers are therefore consistent with a charge-carrier density independent mobility, which is discussed in terms of polaron-dominated hopping and interchain correlated disorder.
The process capability index (PCI) is a quality-control statistic mostly used in the manufacturing industry to assess the capability of a monitored process. It is of great significance to quality-control engineers, as it quantifies the relation between the actual performance of the process and the preset specifications of the product. Most traditional PCIs perform well when the process follows normal behaviour; however, using these traditional indices to evaluate a non-normally distributed process often leads to inaccurate results. In this article, we consider a newer PCI, Cpy, suggested by Maiti et al., which can be used for normal as well as non-normal random variables. The article addresses different methods of estimating the PCI Cpy, from both frequentist and Bayesian viewpoints, for the generalized Lindley distribution suggested by Nadarajah et al. We briefly describe several frequentist approaches, namely maximum likelihood estimators, least-squares and weighted least-squares estimators, and maximum product of spacings estimators. Next, we consider Bayes estimation under the squared-error loss function using gamma priors for both the shape and scale parameters of the considered model. We use Tierney and Kadane's method as well as a Markov chain Monte Carlo procedure to compute approximate Bayes estimates. In addition, two parametric bootstrap confidence intervals based on the frequentist approaches are provided for comparison with highest posterior density credible intervals. Furthermore, a Monte Carlo simulation study has been carried out to compare the performance of the classical and Bayes estimates of Cpy in terms of mean squared errors, along with average widths and coverage probabilities. Finally, two real data sets are analysed for illustrative purposes.
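For reference, the traditional normal-theory indices that the abstract contrasts with Cpy can be computed directly. The sketch below is a minimal illustration of the classical Cp and Cpk only (not Maiti et al.'s Cpy), and the sample measurements and specification limits are hypothetical.

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Classical process capability indices under the normality assumption:
    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
    """
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements centred at 7 with spec limits [4, 10]
sample = [6.8, 7.1, 7.0, 6.9, 7.2, 7.0, 6.95, 7.05]
cp, cpk = cp_cpk(sample, lsl=4.0, usl=10.0)
```

Because the sample mean sits exactly at the midpoint of the specification limits, Cp and Cpk coincide here; a shifted process mean would pull Cpk below Cp.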
This paper proposes a colour image encryption scheme for colour images of arbitrary sizes. In this scheme, a fixed-block-size (3 × 8) block-level diffusion operation is performed to encrypt arbitrarily sized images, overcoming the limitation of performing block-level diffusion on images of arbitrary size. The method first performs bit-plane decomposition and a concatenation operation on the three components (blue, green, and red) of the colour image. Second, it performs row and column shuffling using the Logistic-Sine System. The scheme then executes block division and a fixed block-level diffusion (exclusive-OR) operation using a key image generated by the piece-wise linear chaotic map. Finally, the cipher image is produced by combining the diffused blocks. In addition, SHA-256 hashing of the plain image is used to make the chaotic sequences unique in each encryption run and to protect the ciphertext against known-plaintext and chosen-plaintext attacks. Simulation results and various parameter analyses demonstrate the algorithm's strong performance in image encryption and its resistance to common attacks.
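The diffusion step described above can be illustrated with a simpler chaotic generator. The sketch below substitutes a plain logistic map for the paper's Logistic-Sine System and piece-wise linear chaotic map, derives a keystream from its orbit, and XORs it with the pixel bytes; applying the same operation again decrypts. The map parameter and key value are illustrative, not from the paper.

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantise the chaotic orbit to a byte
    return stream

def xor_diffuse(pixels, key, r=3.99):
    """XOR each byte with the chaotic keystream; the operation is its own inverse."""
    stream = logistic_keystream(key, r, len(pixels))
    return bytes(p ^ s for p, s in zip(pixels, stream))

plain = bytes(range(16))                      # a toy "image" block
cipher = xor_diffuse(plain, key=0.6137)       # encrypt
recovered = xor_diffuse(cipher, key=0.6137)   # decrypt with the same key
```

In the paper's scheme the key would additionally be perturbed by the SHA-256 digest of the plain image, so that each plaintext yields a distinct keystream.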
Multimedia Tools and Applications - Initial work on image phylogeny used approaches such as the minimum spanning tree. A less investigated attempt is a bioinformatics-inspired approach...
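The minimum-spanning-tree approach mentioned above can be sketched on a toy graph. In image phylogeny the edge weights would be dissimilarities between near-duplicate images; here they are arbitrary numbers, and the snippet is a generic Kruskal implementation rather than any method from the article.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: edges are (weight, u, v); returns MST edges and total weight."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving for near-constant lookups
            a = parent[a]
        return a

    mst, total = [], 0.0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # adding the edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Toy graph: 4 "images" with pairwise dissimilarities as edge weights
edges = [(1.0, 0, 1), (4.0, 0, 2), (3.0, 1, 2), (2.0, 1, 3), (5.0, 2, 3)]
tree, total = kruskal_mst(4, edges)
```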
This paper presents a new synthesis procedure for nonrational driving-point functions by defining and using a dedicated operator, whose properties are explored. Applying the Stieltjes transform to the operator, the Padé approximant or the continued-fraction form of the nonrational network function can be obtained with reduced computational complexity. The classical Foster and Cauer forms, or other techniques, may then be applied to synthesize the network functions. The application of this work is demonstrated on functions such as the square root, inverse tangent, logarithm, and Lambert's W function. A set of conditions, called synthesis criteria, is proposed that a nonrational function should satisfy in order to be realizable.
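As a concrete instance of a continued-fraction form for one of the nonrational functions mentioned, note that y = sqrt(1 + x) satisfies y - 1 = x / (1 + y) = x / (2 + (y - 1)), which yields the continued fraction sqrt(1 + x) = 1 + x / (2 + x / (2 + ...)). The sketch below simply truncates this expansion at a fixed depth; it illustrates the approximation only, not the article's operator-based synthesis procedure.

```python
import math

def cf_sqrt1p(x, depth):
    """Truncated continued fraction for sqrt(1 + x) = 1 + x/(2 + x/(2 + ...))."""
    t = 0.0
    for _ in range(depth):
        t = x / (2.0 + t)   # one convergent step, evaluated bottom-up
    return 1.0 + t

approx = cf_sqrt1p(0.5, 20)   # continued-fraction approximation of sqrt(1.5)
exact = math.sqrt(1.5)
```

Truncating such an expansion at finite depth produces a rational function, which is exactly what makes classical Foster/Cauer synthesis applicable.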