Similar Documents
20 similar documents found (search time: 15 ms)
1.
Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that must be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons relate to trade-offs regarding quantity and quality of care, process and outcome, and objective and subjective outcomes.

2.
Content and Methods of Top-Level Design for Manufacturing Big Data (Part I)
China's manufacturing industry currently possesses big data only for remote equipment monitoring and e-commerce, which falls far short of what manufacturing transformation and upgrading require. To address this, an approach to the top-level design of manufacturing big data is proposed. An inventory analysis of manufacturing big data is conducted, covering the establishment, ordering, structuring, and integrated application of big data, including institutional design, standards design, platform design, and key technology research. Big-data-based innovation, mass customization, manufacturing service, and green manufacturing modes are studied, and an organizational implementation method for the top-level design of manufacturing big data in China is proposed.

3.
司光耀, 王凯, 李文强, 李彦, 牟亮. Chinese Journal of Engineering Design (《工程设计学报》), 2016, 23(6).
With the arrival of the product Internet era, and in response to the problems of traditional product design requirement elicitation (single-channel acquisition, poor timeliness, and excessive subjectivity), a product requirement analysis method based on big data analytics and rough set theory is proposed. Web crawler technology is used to acquire, in real time, the large volumes of user requirements scattered across multiple information carriers. Big data analysis tools and rough sets are then used to derive weights for user requirement types at different semantic levels, and the quality function deployment (QFD) house of quality translates user requirements into corresponding engineering design parameters, providing a basis for accurately identifying product design directions that reflect user needs. Taking the Samsung Galaxy S6 Edge as an example, big data analysis of user comments on the phone from Weibo and JD.com, combined with the rough-set-based user requirement computation, verifies the effectiveness of the proposed method.
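The rough-set weighting step can be illustrated with a toy example. The sketch below is not the paper's implementation; the attribute names and decision table are hypothetical. It computes rough-set attribute significance, i.e., the drop in the dependency degree when an attribute is removed, which is one common way to derive requirement weights:

```python
from collections import defaultdict

# Toy decision table: each row = (condition attribute values, decision).
# All attribute names and values are hypothetical illustrations.
rows = [
    ({"screen": "curved", "battery": "long",  "price": "high"}, "buy"),
    ({"screen": "flat",   "battery": "long",  "price": "low"},  "buy"),
    ({"screen": "flat",   "battery": "short", "price": "high"}, "skip"),
    ({"screen": "curved", "battery": "short", "price": "low"},  "skip"),
    ({"screen": "flat",   "battery": "long",  "price": "high"}, "buy"),
]

def dependency(attrs):
    """Rough-set dependency degree: fraction of rows whose equivalence
    class (under `attrs`) is consistent about the decision."""
    classes = defaultdict(list)
    for cond, dec in rows:
        classes[tuple(cond[a] for a in attrs)].append(dec)
    consistent = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return consistent / len(rows)

all_attrs = ["screen", "battery", "price"]
gamma_full = dependency(all_attrs)
for a in all_attrs:
    rest = [x for x in all_attrs if x != a]
    # Significance of `a` = drop in dependency when `a` is removed.
    print(a, "significance:", round(gamma_full - dependency(rest), 3))
```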

4.
Energy sustainability is a complex problem that needs to be tackled holistically, addressing socio-economic and other aspects equally in order to meet strict CO2 emission targets. This paper builds upon our previous work on the effect of household transition on residential energy consumption, in which we developed a 3D urban energy prediction system (EvoEnergy) using the old UK panel data survey, namely the British Household Panel Survey (BHPS). In particular, the aim of the present study is to examine the validity and reliability of EvoEnergy under the new UK Household Longitudinal Study (UKHLS) launched in 2009. To achieve this aim, the household transition and energy prediction modules of EvoEnergy were tested under both data sets using various statistical techniques, such as the Chow test. The analysis of the results suggests that EvoEnergy remains a reliable prediction system, with good prediction accuracy (MAPE ≤ 5%) when compared to actual energy performance certificate data. On this premise, we recommend that researchers working on data-driven energy consumption forecasting consider merging the BHPS and UKHLS data sets. This will, in turn, enable them to capture the bigger picture of different energy phenomena, such as fuel poverty, and consequently to anticipate problems with policy prior to their occurrence. Finally, the paper concludes by discussing two scenarios for EvoEnergy's development in relation to energy policy and decision-making.
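The Chow test used here checks whether one regression fits two samples as well as separate regressions do. A minimal sketch on synthetic data follows; the household-size predictor and all coefficients are hypothetical stand-ins for the BHPS/UKHLS variables:

```python
import numpy as np
from scipy import stats

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_test(X1, y1, X2, y2):
    """Chow test for a structural break between two samples that
    share the same linear model (k = number of coefficients)."""
    k = X1.shape[1]
    n1, n2 = len(y1), len(y2)
    rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = rss(X1, y1) + rss(X2, y2)
    f = ((rss_pooled - rss_split) / k) / (rss_split / (n1 + n2 - 2 * k))
    p = stats.f.sf(f, k, n1 + n2 - 2 * k)
    return f, p

# Hypothetical example: energy use vs. household size in two survey eras.
rng = np.random.default_rng(0)
x1 = rng.uniform(1, 6, 200); y1 = 3.0 + 1.5 * x1 + rng.normal(0, 1, 200)
x2 = rng.uniform(1, 6, 200); y2 = 3.2 + 1.4 * x2 + rng.normal(0, 1, 200)
X1 = np.column_stack([np.ones(200), x1])
X2 = np.column_stack([np.ones(200), x2])
print(chow_test(X1, y1, X2, y2))  # large p-value => no significant break
```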

5.
In view of problems such as the frequent fluctuation of garlic prices, the lack of efficient forecasting means, and the difficulty of achieving steady development of the garlic industry, and drawing on the current situation of the industry and the collected data, this paper takes the big data platform of the garlic industry chain as its core and uses correlation analysis, stationarity tests, co-integration tests, and Granger causality tests to analyze the correlation, dynamics, and causality between garlic prices and young garlic shoot prices. Based on the current state of the garlic industry, a garlic industry service based on Big Data is put forward. The study concludes that there is a positive correlation between garlic prices and young garlic shoot prices, that there is a long-term, stable dynamic equilibrium between the two series, and that young garlic shoot prices can affect garlic prices. Finally, it proposes strengthening the infrastructure of garlic Big Data, increasing technological innovation in and application of garlic Big Data, and improving the safety and security capability of the whole industry in order to promote the development of the garlic industry.
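The battery of tests named above (stationarity, cointegration, Granger causality) is standard in statsmodels. A sketch on synthetic price series standing in for the platform's garlic and young-garlic-shoot data; the series construction and lag choice are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint, grangercausalitytests

# Synthetic stand-ins for monthly garlic and young-garlic-shoot prices.
rng = np.random.default_rng(1)
n = 120
shoot = np.cumsum(rng.normal(0, 1, n)) + 50      # an I(1) series
garlic = 0.8 * shoot + rng.normal(0, 1, n) + 10  # cointegrated with it

# Stationarity: ADF on levels (expect non-stationary, high p-value).
print("ADF p-value (garlic levels):", adfuller(garlic)[1])

# Engle-Granger cointegration between the two price series.
print("Cointegration p-value:", coint(garlic, shoot)[1])

# Granger causality on differenced (stationary) series:
# does shoot price help predict garlic price?
df = pd.DataFrame({"garlic": garlic, "shoot": shoot}).diff().dropna()
grangercausalitytests(df[["garlic", "shoot"]], maxlag=3)
```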

6.
In view of the frequent fluctuation of garlic prices under the market economy, this paper analyzes price fluctuation in the circulation link of the garlic industry chain and discusses how multiple disciplines can be applied to the agricultural industry. On the basis of the big data platform of the garlic industry chain, a GARCH model is constructed to analyze the fluctuation pattern of garlic prices in the circulation link, and a garlic industry service is provided from the angle of price fluctuation combined with economic analysis. The research shows that the rate of return of garlic prices exhibits "agglomeration" (clustering) and cyclical behavior, with fragility, left skewness, and a non-normal distribution, and that the fitted values of the GARCH model are very close to the true values. Finally, the paper looks at possible industry services from the perspective of garlic price fluctuation.
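A GARCH(1,1) fit of the kind described can be sketched with the arch package. The price series below is synthetic, and the skewed-t error distribution is an assumption chosen to mirror the reported left-skewed, non-normal returns:

```python
import numpy as np
import pandas as pd
from arch import arch_model  # pip install arch

# Hypothetical daily garlic prices; in the paper's setting these would
# come from the garlic industry chain big data platform.
rng = np.random.default_rng(2)
prices = pd.Series(8 + np.cumsum(rng.normal(0, 0.05, 500)))
returns = 100 * prices.pct_change().dropna()  # percent returns

# GARCH(1,1): volatility clustering ("agglomeration") shows up as
# significant alpha/beta terms and persistent conditional variance.
model = arch_model(returns, vol="GARCH", p=1, q=1, dist="skewt")
res = model.fit(disp="off")
print(res.summary())
print(res.conditional_volatility.tail())
```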

7.
This study investigates the contributions of online promotional marketing and online reviews as predictors of consumer product demand. Using data from Amazon.com, we attempt to predict whether online review variables, such as the valence and volume of reviews and the number of positive and negative reviews, and online promotional marketing variables, such as discounts and free delivery, can influence the demand for electronic products on Amazon.com. A Big Data architecture was developed, and Node.js agents were deployed to scrape Amazon.com pages using asynchronous I/O calls. The web-crawled and scraped data sets were then preprocessed for neural network analysis. Our results show that variables from both online reviews and promotional marketing strategies are important predictors of product demand, with online review variables generally being better predictors than promotional marketing variables. This study provides important implications for practitioners, who can better understand how online reviews and online promotional marketing influence product demand. Our empirical contributions include the design of a Big Data architecture incorporating neural network analysis, which can be used as a platform for future researchers investigating how Big Data can be used to understand and predict online consumer product demand.
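A compact sketch of the prediction step, feeding review and promotion variables to a small neural network. The feature list follows the abstract, but the data, network size, and demand proxy are all assumed for illustration; scikit-learn's MLPRegressor stands in for whatever network the authors used:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features per product: review valence, review volume,
# positive/negative review counts, discount %, free-delivery flag.
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.uniform(1, 5, n),      # average star rating (valence)
    rng.poisson(200, n),       # number of reviews (volume)
    rng.poisson(150, n),       # positive reviews
    rng.poisson(50, n),        # negative reviews
    rng.uniform(0, 40, n),     # discount percentage
    rng.integers(0, 2, n),     # free delivery (0/1)
])
# Synthetic demand proxy, e.g. units sold derived from sales rank.
y = (30 * X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 3]
     + 2 * X[:, 4] + rng.normal(0, 20, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out products:", round(model.score(X_te, y_te), 3))
```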

8.
Big Data is reshaping many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining performance and data quality. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing its techniques and challenges has not been available until now. We have therefore systematically reviewed the evidence on Big Data testing techniques published in the period 2010–2021. This paper discusses the testing of data processing by highlighting the techniques used in every processing phase. Furthermore, we discuss the challenges and future directions. Our findings show that diverse functional, non-functional, and combined (functional and non-functional) testing techniques have been used to solve specific problems related to Big Data, and that most testing challenges arise during the MapReduce validation phase. In addition, combinatorial testing is one of the most frequently applied techniques, often in combination with others (i.e., random testing, mutation testing, input space partitioning, and equivalence testing), to find various functional faults in Big Data systems.
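To make the combinatorial-testing idea concrete, the sketch below greedily builds a pairwise (2-way) covering suite for a hypothetical Big Data job configuration. The parameters and the greedy strategy are illustrative, not taken from any surveyed paper:

```python
from itertools import combinations, product

# Hypothetical configuration space for a Big Data job under test.
params = {
    "engine": ["mapreduce", "spark"],
    "format": ["csv", "parquet", "avro"],
    "compression": ["none", "gzip"],
}
names = list(params)

# Every parameter-value pair a pairwise (2-way) suite must cover.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

def pairs_of(test):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Greedy construction: repeatedly pick the candidate covering the most
# still-uncovered pairs (enumerating all candidates is fine at toy scale).
candidates = [dict(zip(names, vs)) for vs in product(*params.values())]
suite = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} pairwise tests instead of {len(candidates)} exhaustive:")
for t in suite:
    print(t)
```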

9.
Networks are fundamental to our modern world, appearing throughout science and society, and access to massive amounts of network data presents a unique opportunity to the research community. As networks grow in size, their complexity increases, and our ability to analyze them with the current state of the art is at severe risk of failing to keep pace. This paper therefore opens a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas of graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including methods for graph filtering, graph learning, graph signals, the graph Fourier transform (GFT), spectra, and graph frequencies. Graph filtering is a basic task that isolates the contributions of individual frequencies and therefore enables the removal of noise. We then consider the graph filter as a model that helps extend GSP methods to large data sets. To show its suitability and effectiveness, we created a noisy graph signal and applied the filter to it. Across several rounds of simulation, the filtered signal appears smoother and closer to the original noise-free, distance-based signal. Through this example application, we demonstrate that graph filtering is efficient for big data analytics.
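The filtering experiment described (noisy signal in, smoother signal out) can be reproduced in miniature: build a graph Laplacian, take its eigendecomposition as the graph Fourier basis, and zero out high graph frequencies. The ring graph, signal, and cut-off below are illustrative assumptions:

```python
import numpy as np

# Small illustrative graph: a ring of 20 nodes (adjacency matrix A).
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Graph Laplacian; its eigenvectors form the graph Fourier basis.
L = np.diag(A.sum(1)) - A
eigvals, U = np.linalg.eigh(L)  # columns of U = graph Fourier modes

# Smooth signal plus noise, as in the paper's experiment.
rng = np.random.default_rng(4)
clean = np.cos(2 * np.pi * np.arange(n) / n)
noisy = clean + rng.normal(0, 0.3, n)

# Ideal low-pass graph filter: keep only the lowest graph frequencies.
h = (eigvals < 1.0).astype(float)      # spectral response (assumed cut-off)
filtered = U @ (h * (U.T @ noisy))     # GFT -> filter -> inverse GFT

print("noise power before:", round(np.mean((noisy - clean) ** 2), 4))
print("noise power after: ", round(np.mean((filtered - clean) ** 2), 4))
```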

10.
In the security and privacy fields, Access Control (AC) systems are viewed as fundamental networking security mechanisms. Enforcing AC becomes even more challenging when researchers and data analysts must work with complex, distributed Big Data (BD) processing cluster frameworks, which are adopted to manage yottabytes of unstructured sensitive data. For instance, the privacy and security restrictions of Big Data systems are most likely to fail due to malformed AC policy configurations. Furthermore, BD systems were initially developed to address certain database issues arising from BD challenges, and many of them dealt with the "three Vs" (Velocity, Volume, and Variety) without planning for security, which was later added as patchwork. Some of the BD "three Vs" characteristics, such as distributed computing, fragmented and redundant data, and node-to-node communication, each with its own security challenges, complicate the applicability of AC in BD even further.

This paper gives an overview of the latest security and privacy challenges in BD AC systems. Furthermore, it analyzes and compares some of the latest AC research frameworks for reducing privacy and security issues in distributed BD systems, very few of which enforce AC in a cost-effective and timely manner. Moreover, this work discusses future research methodologies and improvements for BD AC systems. This study is a valuable asset for Artificial Intelligence (AI) researchers, DB developers, and DB analysts who need the latest AC security and privacy research perspective before using or improving a current BD AC framework.

11.
In recent years, the rapid development of big data technology has attracted growing scholarly attention, and problems of massive data storage and computation have largely been solved. At the same time, the problem of outlier detection in massive data has emerged, and much research has been devoted to it. However, existing methods have high computation times, so an improved outlier detection algorithm with higher performance is presented in this paper. SMK-means is a fusion algorithm that combines Mini Batch K-means with simulated annealing for anomaly detection in massive household electricity data; it can determine the number of clusters, reduce the number of iterations, and improve clustering accuracy. Several experiments are performed to compare and analyze multiple aspects of the algorithm's performance. The analysis shows that the proposed algorithm is superior to existing algorithms.
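The abstract does not spell out how the fusion works, so the sketch below is one plausible reading: simulated annealing searches over the number of clusters, scoring each candidate with a Mini Batch K-means run plus an assumed complexity penalty. The data, penalty weight, and cooling schedule are all illustrative:

```python
import math
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for household electricity consumption features.
X, _ = make_blobs(n_samples=5000, centers=6, cluster_std=1.2, random_state=0)

def cost(k):
    km = MiniBatchKMeans(n_clusters=k, n_init=3, random_state=0).fit(X)
    # Inertia always falls as k grows, so add an assumed complexity
    # penalty (a crude elbow heuristic); tune the weight on real data.
    return km.inertia_ * (1.0 + 0.05 * k)

rng = np.random.default_rng(0)
k = 2
c = cost(k)
best_k, best_c = k, c
T = 1.0
for _ in range(30):
    k_new = int(min(15, max(2, k + rng.integers(-2, 3))))
    c_new = cost(k_new)
    # Metropolis rule: always accept improvements; accept worse moves
    # with a probability that shrinks as the temperature T cools.
    if c_new < c or rng.random() < math.exp((c - c_new) / (T * c)):
        k, c = k_new, c_new
        if c < best_c:
            best_k, best_c = k, c
    T *= 0.85  # cooling schedule

print("selected number of clusters:", best_k)
```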

12.
13.
In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The timing of knowledge transfer is one of the most important factors in improving knowledge transfer efficiency. Based on an analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories: the simultaneous transfer of various types of knowledge, and multiple knowledge transfers at different points in time. Taking into consideration influential factors such as knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, the profit contribution of each type of knowledge, transfer costs, and the product life cycle, time optimization models of multiple knowledge transfers in the big data environment are presented that maximize the total discounted expected profits (DEPs) of an enterprise. Simulation experiments verify the validity of the models, which can help enterprises determine the optimal times for multiple knowledge transfers in the big data environment.
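The models themselves are not reproduced in the abstract, so the sketch below only illustrates the shape of the problem: choose one transfer time t that maximizes total discounted expected profit under an assumed profit path with knowledge decay. All parameters and the profit function are invented for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Assumed illustrative parameters (not the paper's calibration):
r, T = 0.10, 10.0   # discount rate, product life cycle (years)
decay = 0.30        # knowledge update rate: existing knowledge goes stale
p0, gain, cost = 5.0, 3.0, 8.0  # base profit rate, transfer gain, cost

def dep(t):
    """Total discounted expected profit with one transfer at time t."""
    # Before t: profit rate decays as existing knowledge becomes stale.
    before, _ = quad(lambda s: p0 * np.exp(-decay * s) * np.exp(-r * s),
                     0, t)
    # After t: rate resets to p0 + gain, then decays again.
    after, _ = quad(lambda s: (p0 + gain) * np.exp(-decay * (s - t))
                    * np.exp(-r * s), t, T)
    return before + after - cost * np.exp(-r * t)

res = minimize_scalar(lambda t: -dep(t), bounds=(0.0, T), method="bounded")
print("optimal transfer time:", round(res.x, 2),
      "| DEP:", round(dep(res.x), 2))
```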

14.
Several recent online quantile algorithms address order statistics over high-volume, high-velocity data streams, but they are not scalable because they use the GK algorithm as a subroutine, which is not known to be mergeable. Another drawback is that they cannot maintain correctness: the error grows as the window slides. In this paper, we use a novel data structure to store a sketch that maintains order statistics over sliding windows, and we propose three algorithms based on it. The fixed-size window algorithm keeps a sketch of the last W elements and is scalable thanks to the mergeable property. The time-based window algorithm always keeps a sketch of the data from the last T time units. Finally, we provide a window aggregation algorithm that extends our approach to distributed systems, providing a speed boost and making it more suitable for modern applications such as system and network monitoring and anomaly detection. The experimental results show that our algorithms not only achieve acceptable performance but also actually maintain correctness and are mergeable.
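A drastically simplified, exact stand-in shows why block-structured summaries make sliding-window quantiles mergeable. A real implementation would store compressed summaries (e.g., KLL sketches) per block rather than full sorted lists; everything below is an illustration, not the paper's data structure:

```python
import bisect
import random
from collections import deque

class BlockQuantileSketch:
    """Window split into blocks; per-block sorted summaries can be
    merged for a quantile query, and whole blocks are evicted as the
    window slides (eviction is at block granularity, so the window
    size is approximate)."""

    def __init__(self, window, block):
        self.window, self.block = window, block
        self.blocks = deque()  # each entry: a sorted list
        self.current = []

    def insert(self, x):
        bisect.insort(self.current, x)
        if len(self.current) == self.block:
            self.blocks.append(self.current)
            self.current = []
        total = sum(map(len, self.blocks)) + len(self.current)
        while total > self.window:
            total -= len(self.blocks.popleft())

    def quantile(self, q):
        # "Merge" step: combine all block summaries, then index.
        merged = sorted(x for b in [*self.blocks, self.current] for x in b)
        return merged[int(q * (len(merged) - 1))]

s = BlockQuantileSketch(window=1000, block=100)
random.seed(0)
for _ in range(5000):
    s.insert(random.random())
print("median of last ~1000 items:", round(s.quantile(0.5), 3))
```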

15.
Engineering, 2019, 5(6): 1010-1016
Safe, efficient, and sustainable operations and control are primary objectives in industrial manufacturing processes. State-of-the-art technologies rely heavily on human intervention and thereby show apparent limitations in practice. The burgeoning era of big data is influencing the process industries tremendously, providing unprecedented opportunities to achieve smart manufacturing. This kind of manufacturing requires machines not only to relieve humans of intensive physical work, but also to take on intellectual labor and even produce innovations on their own. To attain this goal, data analytics and machine learning are indispensable. In this paper, we review recent advances in data analytics and machine learning applied to the monitoring, control, and optimization of industrial processes, paying particular attention to the interpretability and functionality of machine learning models. By analyzing the gap between practical requirements and the current research status, we identify promising future research directions.
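One classic instance of the data-driven monitoring this review covers is PCA-based fault detection with a squared prediction error (SPE, or Q) statistic. The sketch below is a generic textbook example on synthetic sensor data, not a method taken from the review:

```python
import numpy as np
from sklearn.decomposition import PCA

# Training data = normal operation; test point = a faulty sensor.
# All numbers here are synthetic illustrations.
rng = np.random.default_rng(5)
latent = rng.normal(size=(500, 2))
X_normal = (latent @ rng.normal(size=(2, 10))
            + 0.1 * rng.normal(size=(500, 10)))

pca = PCA(n_components=2).fit(X_normal)

def spe(X):
    """Squared prediction error: distance to the PCA subspace."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return ((X - X_hat) ** 2).sum(axis=1)

limit = np.quantile(spe(X_normal), 0.99)  # empirical 99% control limit
x_fault = X_normal[:1].copy()
x_fault[0, 3] += 5.0                      # simulated biased sensor
print("SPE limit:", round(limit, 3),
      "| faulty SPE:", round(spe(x_fault)[0], 3))
```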

16.
The rapid growth of user interactions on social media sites yields useful insights in many areas. Facebook is currently the most popular social media site, with the highest number of active users, making it a valuable and hassle-free data source. Beyond entertainment, people use Facebook to get instant updates on current affairs. The ability to receive updates from several news channels in a single news feed, the ease of reacting to news posts with gesture-based reactions, and the ability to send and share messages are some of the main reasons for its growing popularity as a news medium. Politics has always been a ubiquitous topic. Sri Lanka fought a war on terrorism for nearly three decades, followed by a government (2005-2015) led by the same political party, which was alleged to be autocratic; this decade of rule heavily influenced citizens' political convictions. Against this background, the "Good Governance" coalition (2015-2019) defeated the ruling party at the presidential election held at that time, claiming it would direct Sri Lanka towards a sustainable, stable, responsible, and moral society, with constitutional amendments guaranteeing democracy to all ethnic groups and eradicating corruption, wastage, and fraud. The interest and motivation of this study is to discover whether there are significant trends in the Sri Lankan political context following this transformation, from the perspective of the general public, using Facebook user reactions on news posts as the data source. The analysis reveals an increasing trend of user reactions to politics from 2011 to 2018. Further, it identifies that the present government (2015-2019) shows a decreasing trend of user reactions compared with past years (2011-2015) in the eyes of its citizens, although it pledged better governance, whereas the previous government shows an increasing trend even though it was overpowered by the "Good Governance" coalition for its alleged unscrupulous rule.

17.
Engineering, 2017, 3(1): 66-70
Method development has always been and will continue to be a core driving force of microbiome science. In this perspective, we argue that in the next decade, method development in microbiome analysis will be driven by three key changes in both ways of thinking and technological platforms: ① a shift from dissecting microbiota structure by sequencing to tracking microbiota state, function, and intercellular interaction via imaging; ② a shift from interrogating a consortium or population of cells to probing individual cells; and ③ a shift from microbiome data analysis to microbiome data science. Some of the recent method-development efforts by Chinese microbiome scientists and their international collaborators that underlie these technological trends are highlighted here. It is our belief that the China Microbiome Initiative has the opportunity to deliver outstanding "Made-in-China" tools to the international research community, by building an ambitious, competitive, and collaborative program at the forefront of method development for microbiome science.

18.
Most big data analytics applied to transportation data sets suffer from being too domain-specific: they draw conclusions for a data set based on analytics over that same data set. As a result, models trained on one domain (e.g., taxi data) transfer poorly to a different domain (e.g., Uber data). To achieve accurate analyses in a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose semi-supervised and active learning over big data to accomplish the domain adaptation task: selectively choosing a small number of data points from a new domain while achieving performance comparable to using all the data points. We use New York City (NYC) taxi and Uber transportation data, simulating different domains with 90% as the source domain for training and the remaining 10% as the target domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting data points. Experimental results show that our adaptation achieves performance comparable to using all data points while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can make accurate analytics and predictions when big data sets are not available, and even when they are, it chooses the most informative data points, making the process much more efficient without having to process huge amounts of data.
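An uncertainty-based selection step of the kind described might look like the following sketch: train on the source domain, then query the target-domain points where an ensemble disagrees most. The random-forest disagreement criterion and all data are illustrative assumptions, not the authors' exact strategy:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic source domain (e.g. taxi trips) and a shifted target
# domain (e.g. Uber trips); features and coefficients are invented.
rng = np.random.default_rng(6)
X_src = rng.uniform(0, 1, (2000, 5))
y_src = X_src @ np.array([3, -2, 1, 0.5, 2]) + rng.normal(0, 0.2, 2000)
X_tgt = rng.uniform(0.3, 1.3, (1000, 5))  # distribution shift

model = RandomForestRegressor(n_estimators=50,
                              random_state=0).fit(X_src, y_src)

# Per-tree predictions give a cheap disagreement (variance) estimate:
# high std across trees means the source model is unsure there.
per_tree = np.stack([t.predict(X_tgt) for t in model.estimators_])
uncertainty = per_tree.std(axis=0)

budget = 50  # label only the most informative target points
query_idx = np.argsort(uncertainty)[-budget:]
print("target points to label:", query_idx[:10], "...")
```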

19.
Product innovation is regarded as a primary means for enterprises to maintain their competitive advantage, and knowledge transfer is a major way for enterprises to access external knowledge for new product innovation. Knowledge transfer, however, carries the risk of infringing the intellectual property rights of other enterprises and of the knowledge source terminating licensing agreements, so enterprises must develop independent innovation knowledge even as they profit from knowledge transfers. New product development (NPD) by an enterprise therefore usually draws on three types of new knowledge: big data knowledge transferred from big data knowledge providers, private knowledge transferred from other enterprises, and new knowledge developed independently by the enterprise in the big data environment. To examine how these different types of knowledge influence NPD performance, a model is presented that maximizes expected NPD performance. The results show that the greater the weight of independent innovation knowledge, the greater the NPD performance; that enterprises tend to transfer knowledge from the external environment when research and development (R&D) investment is much higher; and that enterprises speed up independent innovation when independent innovation knowledge is expected to bring a larger market share. The model can help enterprises determine their knowledge composition and the scale of R&D investment, and predict NPD performance.

20.
With the rapid development of the mobile Internet and financial technology, online e-commerce transactions have been increasing and expanding rapidly, bringing great convenience and availability to our lives worldwide; at the same time, opportunities for fraud come in all shapes and sizes. Moreover, fraud detection in online e-commerce transactions differs from that in existing areas because of the massive amounts of data generated in e-commerce, which scatter fraudulent transactions among genuine ones more covertly than before. In this article, a novel, scalable, and comprehensive approach for fraud detection in online e-commerce transactions is proposed, consisting of four main logical modules, which uses big data analytics and machine learning algorithms to parallelize the processing of data from a Chinese e-commerce company. Groups of experimental results show that the approach detects fraud in online e-commerce transactions more accurately and efficiently, and scales to big data processing with real-time performance.
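The article's pipeline is not reproduced here, but a minimal sketch of a machine-learning core, a class-weighted classifier on heavily imbalanced synthetic transactions, illustrates the kind of component such a module might wrap. The features, labels, and weighting choice are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy stand-ins for transaction features: amount, hour, account age,
# item count. Fraud is rare (~1%), so class weighting matters more
# than raw accuracy.
rng = np.random.default_rng(7)
n = 20000
X = np.column_stack([
    rng.lognormal(3, 1, n),    # transaction amount
    rng.integers(0, 24, n),    # hour of day
    rng.exponential(300, n),   # account age in days
    rng.integers(1, 10, n),    # item count
])
fraud_score = 0.002 * X[:, 0] - 0.004 * X[:, 2] + rng.normal(0, 1, n)
y = (fraud_score > np.quantile(fraud_score, 0.99)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             n_jobs=-1, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```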
