Similar Literature
20 similar documents found (search time: 171 ms)
1.
While organisations are increasingly interested in artificial intelligence (AI), many AI projects encounter significant issues or even fail. To gain a deeper understanding of the issues that arise during these projects and the practices that contribute to addressing them, we study the case of Consult, a North American AI consulting firm that helps organisations leverage the power of AI by providing custom solutions. The management of AI projects at Consult is a multi-method approach that draws on elements from traditional project management, agile practices, and AI workflow practices. While the combination of these elements enables Consult to be effective in delivering AI projects to their customers, our analysis reveals that managing AI projects in this way draws upon three core logics, that is, commonly shared norms, values, and prescribed behaviours which influence actors' understanding of how work should be done. We identify that the simultaneous presence of these three logics—a traditional project management logic, an agile logic, and an AI workflow logic—gives rise to conflicts and issues in managing AI projects at Consult, and that successfully managing these AI projects involves resolving the conflicts that arise between them. From our case findings, we derive four strategies to help organisations better manage their AI projects.

2.
3.
Searle's Chinese Box: Debunking the Chinese Room Argument   (Total citations: 4; self-citations: 2; citations by others: 2)
Hauser, Larry. Minds and Machines, 1997, 7(2): 199–226
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

4.
Artificial intelligence (AI) experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment into two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons (as found in the traditional norms of computer science) and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: (1) advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; (2) technical research on how to make any AI more beneficial to society; and (3) policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other's advocacy for attention to the societal impacts of AI. The reconciliation between the presentist and futurist factions can improve both near-term and long-term societal impacts of AI.

5.
The research agendas of artificial intelligence and real-time systems are converging as AI methods move toward domains that require real-time responses, and real-time systems move toward complex applications that require intelligent behavior. They meet at the crossroads in an exciting new subfield commonly called “real-time AI.” This subfield is still being defined, and the precise goals for various real-time AI systems are in flux. Our goal is to identify promising areas for future research in both real-time and AI techniques. We describe an organizing conceptual structure for current real-time AI research, exploring the meanings this term has acquired. We then identify the goals of real-time AI research and specify some necessary steps for reaching them.

6.
Robbins, Scott. Minds and Machines, 2019, 29(4): 495–514

There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.


7.

This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline, and possible future perspectives. In this respect, I go through different seasons of AI research (of AI and Law in particular): from the Winter of AI, namely a period of mistrust in AI (throughout the eighties until the early nineties), to the Summer of AI, namely the current period of great interest in the discipline with lots of expectations. One of the results of the first decades of AI research is that “intelligence requires knowledge”. Since its inception the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may promote the development of the Semantic Web, as well as of AI systems. Finally, I provide my insight into the potential of AI development, which takes into account technological opportunities and theoretical limits.


8.

A few corporations and government agencies have begun applying artificial intelligence (AI) technology to business and logistical problems. If and when AI is incorporated into information processing systems, significant competitive opportunities may present themselves, but the return on an early AI investment can't be known yet. The six-stage model outlined in this article not only helps calculate the value of individual AI functions but also enables MIS management to anticipate AI's strategic potential.

9.
Meissner, Gunter. AI & Society, 2020, 35(1): 225–235
Our society is in the middle of the AI revolution. We discuss several applications of AI, in particular medical causality, where deep-learning neural networks screen through big...

10.
Human-level AI will be achieved, but new ideas are almost certainly needed, so a date cannot be reliably predicted—maybe five years, maybe five hundred years. I'd be inclined to bet on this 21st century. It is not surprising that human-level AI has proved difficult and progress has been slow—though there has been important progress. The slowness and the demand to exploit what has been discovered have led many to mistakenly redefine AI, sometimes in ways that preclude human-level AI—by relegating to humans parts of the task that human-level computer programs would have to do. In the terminology of this paper, it amounts to settling for a bounded informatic situation instead of the more general common sense informatic situation. Overcoming the “brittleness” of present AI systems and reaching human-level AI requires programs that deal with the common sense informatic situation—in which the phenomena to be taken into account in achieving a goal are not fixed in advance. We discuss reaching human-level AI, emphasizing logical AI and especially emphasizing representation problems of information and of reasoning. Ideas for reasoning in the common sense informatic situation include nonmonotonic reasoning, approximate concepts, formalized contexts and introspection.

11.
Perception is the interaction interface between an intelligent system and the real world. Without sophisticated and flexible perceptual capabilities, it is impossible to create advanced artificial intelligence (AI) systems. For the next-generation AI, called ‘AI 2.0’, one of the most significant features will be that AI is empowered with intelligent perceptual capabilities that can simulate the mechanisms of the human brain and are likely to surpass the human brain in terms of performance. In this paper, we briefly review the state-of-the-art advances across different areas of perception, including visual perception, auditory perception, speech perception, and perceptual information processing and learning engines. On this basis, we envision several R&D trends in intelligent perception for the forthcoming era of AI 2.0, including: (1) human-like and transhuman active vision; (2) auditory perception and computation in an actual auditory setting; (3) speech perception and computation in a natural interaction setting; (4) autonomous learning of perceptual information; (5) large-scale perceptual information processing and learning platforms; and (6) urban omnidirectional intelligent perception and reasoning engines. We believe these research directions should be highlighted in future plans for AI 2.0.

12.

Artificial intelligence (AI) is a fascinating new technology that incorporates machine learning and neural networks to improve existing technologies or create new ones. Potential applications of AI in the fight against colorectal cancer (CRC) are introduced. These include how AI will affect the epidemiology of colorectal cancer, as well as new methods of mass information gathering such as GeoAI, digital epidemiology and real-time information collection. This review also examines how existing diagnostic tools, such as CT/MRI, endoscopy, genetics, and pathological assessment, have benefited greatly from the implementation of deep learning. Finally, it discusses how treatment approaches to CRC can be enhanced by applying AI. The power of AI in therapeutic recommendation for colorectal cancer shows much promise in the clinical and translational field of oncology, which means better and more personalized treatments for those in need.


13.
This paper argues that the conventional methodology of software engineering is inappropriate to AI, but that the failure of many in AI to see this is producing a Kuhnian paradigm crisis. The key point is that classic software engineering methodology (which we call SPIV: Specify-Prove-Implement-Verify) requires that the problem be capable of being circumscribed or surveyed in a way that it is not for areas of AI such as natural language processing. In addition, it also requires that a program be open to formal proof of correctness. We contrast this methodology with a weaker form, complete Specification And Testability (SAT — where the last term is used in a strong sense: every execution of the program gives decidably correct/incorrect results), which captures both the essence of SPIV and the key assumptions in practical software engineering. We argue that failure to recognize the inability to apply the SAT methodology to areas of AI has prevented development of a disciplined methodology (which is unique to AI and which we call RUDE: Run-Understand-Debug-Edit) that will accommodate the peculiarities of AI and also yield robust, reliable, comprehensible, and hence maintainable AI software.

14.
The idea of developing a system that can converse in and understand human languages has been around since the 1200s. With the advancement of artificial intelligence (AI), Conversational AI came of age in 2010 with the launch of Apple's Siri. Conversational AI systems leverage Natural Language Processing (NLP) to understand and converse with humans via speech and text. These systems have been deployed in sectors such as aviation, tourism, and healthcare. However, the application of Conversational AI in the architecture, engineering and construction (AEC) industry is lagging, and little is known about the state of research on Conversational AI. Thus, this study presents a systematic review of Conversational AI in the AEC industry to provide insights into current developments, and a Focus Group Discussion was conducted to highlight challenges and validate areas of opportunity. The findings reveal that Conversational AI applications hold immense benefits for the AEC industry, but they are currently underexplored. The major challenges behind this underexploration are highlighted and discussed for intervention. Lastly, opportunities and future research directions for Conversational AI, which would improve the productivity and efficiency of the industry, are projected and validated. This study presents the status quo of a fast-emerging research area and serves as the first attempt in the AEC field. Its findings provide insights into this new field that will benefit researchers and stakeholders in the AEC industry.

15.

This paper reviews the current state of the art in artificial intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically machine learning (ML) algorithms, is provided including convolutional neural networks (CNNs), generative adversarial networks (GANs), recurrent neural networks (RNNs) and deep reinforcement learning (DRL). We categorize creative applications into five groups, related to how AI technologies are used: (i) content creation, (ii) information analysis, (iii) content enhancement and post production workflows, (iv) information extraction and enhancement, and (v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, ML-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of ML in domains with fewer constraints, where AI is the ‘creator’, remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human-centric—where it is designed to augment, rather than replace, human creativity.


16.

The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.


17.
The advancement of artificial intelligence (AI) has truly stimulated the development and deployment of autonomous vehicles (AVs) in the transportation industry. Fueled by big data from various sensing devices and advanced computing resources, AI has become an essential component of AVs for perceiving the surrounding environment and making appropriate decisions in motion. To achieve the goal of full automation (i.e., self-driving), it is important to know how AI works in AV systems. Existing research has made great efforts in investigating different aspects of applying AI in AV development. However, few studies have offered the research community a thorough examination of current practices in implementing AI in AVs. Thus, this paper aims to narrow the gap by providing a comprehensive survey of key studies in this research avenue. Specifically, it analyzes the use of AI in supporting the primary applications in AVs: 1) perception; 2) localization and mapping; and 3) decision making. It investigates current practices to understand how AI can be used and what challenges and issues are associated with its implementation. Based on the exploration of current practices and technology advances, this paper further provides insights into potential opportunities regarding the use of AI in conjunction with other emerging technologies: 1) high-definition maps, big data, and high-performance computing; 2) augmented reality (AR)/virtual reality (VR) enhanced simulation platforms; and 3) 5G communication for connected AVs. This paper is expected to offer a quick reference for researchers interested in understanding the use of AI in AV research.

18.
It is becoming clear that, contrary to earlier expectations, the application of AI techniques to law is neither as easy nor as effective as some claimed. Unfortunately, for most AI researchers, there seems to be little understanding of just why this is. In this paper I argue, from empirical study of lawyers in action, just why there is a mismatch between the AI view of law and law in practice. While this is important and novel, it also — if my arguments are accepted — demonstrates just why AI will never have success in producing the computerised lawyer. This is a revised version of a paper presented at the annual conference of the Speech Communication Association, Atlanta, October/November 1991.

19.
AI & Society - The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs...

20.
The authors review and categorize research on applications of artificial intelligence (AI) and expert systems (ES) in new product development (NPD) activities. A brief overview of the NPD process and of AI is presented. This is followed by a literature survey of AI and ES applications in NPD, which revealed twenty-four articles (twenty-two applications) in the 1990–1997 period. The applications are categorized into five areas: expert decision support systems for NPD project evaluation, knowledge-based systems (KBS) for product and process design, KBS for QFD, AI support for conceptual design, and AI support for group decision making in concurrent engineering. A brief review of each application is provided. The articles are also grouped by NPD stages and seven NPD core elements (competencies and abilities). Further research areas are pointed out.
