Similar Literature
20 similar documents found
1.
Maurice Berix, AI & Society, 2012, 27(1): 165-172
Engaging the public in decision-making processes is commonly accepted as an effective strategy for better policy making, stronger policy support, and narrowing the gap between government and the public. In today’s digitised society, participation via online media is becoming more important. But is this so-called e-participation being used optimally? Or is a better design possible? In my opinion, the answer to these questions is ‘yes’. Despite numerous efforts to engage the public in policy deliberation, the actual number of participants remains low. In this article, I have used the YUTPA model (Nevejan 2009) to analyse some existing e-participation projects. Additionally, I derived ten characteristics of ‘play’ to make proposals for a more designerly approach to e-participation.

2.
Integrated operations (IO) is an operating mode in the offshore oil and gas industry that is expected to lead to safer, faster and better operations. This article presents an analysis of the anticipated impacts of increased instrumentation on the safety of drilling operations. The instrumentation is related to the change process of IO, and is exemplified by a group of IO tools for interpretation, diagnosis and automation. An important finding in the study is the identification of a set of controversies that reflect characteristic challenges of drilling operations. These controversies involve the quantity and accessibility of information, the issue of centralized and decentralized control, the relation between standardized and unique interpretation of data, and the heterogeneous nature of engineering work. It is argued that the impact of the IO tools on safety will depend on how these controversies are taken into account when the tools are adopted. It is also argued that the cognitive control of the operations is distributed across a range of human and nonhuman actors and that the impact of the IO tools thus depends on how they are adapted to the system of distributed cognition rather than on the properties of the tools themselves.

3.
With MaTRICS, we describe a service-oriented architecture that allows remotely connected users to modify the configuration of any service provided by a specific (application) server, such as email, news, or web servers. Novel to our approach is that the system can manage configuration processes on heterogeneous software and hardware platforms, and that these processes can be performed from a variety of peripherals unmatched in today’s practice: devices such as mobile phones, faxes, and PDAs can be used by system managers as remote system configuration and management tools.

4.
5.
Authoring of multimedia content can be considered the composition of media assets such as images, videos, text, and audio in time, space, and interaction into a coherent multimedia presentation. Personalization of such content means that it reflects the users’ or user groups’ profile information and context information. Enriching the multimedia content with semantically rich metadata allows for better search and retrieval of the content. To actually create personalized, semantically rich multimedia content, manually authoring the many different documents for all the different users’ and user groups’ needs is not feasible. Rather, a (semi-)automatic authoring of the content seems reasonable. We have analyzed in detail today’s approaches and systems for authoring, personalizing, and semantically enriching multimedia presentations. Based on this analysis, we derived a general creation chain for the (semi-)automatic generation of such content. In this paper, we introduce this creation chain. We present our software engineering support for the chain, the component framework SemanticMM4U. The canonical processes supported by the creation chain and the SemanticMM4U framework are described in detail. We also provide an explicit mapping of SemanticMM4U framework components to the processes and argue for the benefits of defining canonical processes for creating personalized, semantically rich multimedia presentations.

6.
A review of the current air traffic control system is undertaken from the perspective of human centered design, focusing on the development of today’s system, the problems in today’s system, and the challenges going forward. Today’s system evolved around the operators in the system (mainly air traffic controllers and pilots), rather than being designed based on specific engineering analyses. This human centered focus has helped make air transportation remarkably safe, but has also made the air traffic control system somewhat inscrutable. This opaqueness of how the system operates poses significant problems for current attempts to transform the system into its “next generation” with significantly improved capacity. Advances in human centered computing required for this transformation to proceed are discussed: specifically, advances in computing the safety of complex human-integrated systems, in understanding and measuring situation awareness, and in visualizing complex data.

7.
The present study, based on a comparative analysis of several plans for Lisbon’s Baixa district, with an emphasis on that area’s public space, contributes to an understanding of the urban design process and presents a fresh perspective on dealing with historical data by conducting an a posteriori analysis using mathematical tools to uncover relations within it. The nine plans used were quantified and evaluated in a comparative manner. CAD was used to quantify the urban morphology of the different plans, while comparative tables make it possible to register the data, which was further evaluated through two interrelated processes: mathematical analysis and urban analysis. The results show the existence of power law relations for the areas of each of the city’s different elements (e.g., blocks, churches, largos and adros). We discuss how this contributes to the understanding of the plans’ elements.
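As a reading aid, an illustrative formulation of such a relation (not drawn from the plans or the paper itself): a power law relation between an element’s rank $k$ and its area $A$ takes the form
\[
A(k) \;\approx\; C\,k^{-\alpha}, \qquad \log A(k) \;\approx\; \log C - \alpha \log k ,
\]
so that plotting the areas of blocks, churches, largos, or adros against their rank on log–log axes yields an approximately straight line whose slope estimates the exponent $\alpha$.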

8.
Ugo Pagallo, AI & Society, 2011, 26(4): 347-354
This paper adopts a legal perspective to counter some exaggerations in today’s debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relatively strong consensus among lawyers about key notions such as agency and liability in the current use of robots. However, since we are dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, the time has come for lawyers to acknowledge that some acts of robots should be considered a new source of legal responsibility for others’ behaviour.

9.
It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools that have been used in computational metaphysics.
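As an illustration of the kind of inconsistency at issue, stated in standard deontic logic (an assumption here, not the paper’s own formalism): the axiom D says $O\varphi \rightarrow \lnot O\lnot\varphi$ (what is obligatory is not also forbidden). A list of do’s and don’ts containing both $O(p)$ and $O(\lnot p)$ is therefore inconsistent, since D applied to $O(p)$ yields $\lnot O(\lnot p)$, contradicting $O(\lnot p)$; this is exactly the kind of defect a meta-ethical check could flag before the robot reasons with the list.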

10.
Conclusions. It is asserted that current approaches and automated support for requirements engineering are not yet sufficient to build today’s and tomorrow’s complex systems. Requirements engineering, itself intricately connected to system design and system solution and not separate from either, needs to be embedded into a total systems engineering approach. This is the route to systems engineering maturity. Software and systems engineering can and should learn from each other.

11.
In today’s digital information age, companies are struggling with an immense overload of mainly unstructured data. Reducing search times, fulfilling compliance requirements and maintaining information quality represent only three of the challenges that organisations from all industry sectors are faced with. Enterprise content management (ECM) has emerged as a promising approach to addressing these challenges. Yet there are still numerous obstacles to the implementation of ECM technologies, particularly because the key challenges of ECM adoption processes are organisational rather than technological. In the present article we claim that considering an organisation’s business process structure is particularly crucial for ECM success. In response to this, we introduce a process-oriented conceptual framework that systematises the key steps of an ECM adoption. The paper suggests that ECM and business process management are two strongly related fields of research.

12.
In today’s competitive market, the design of digital systems (hardware as well as software) faces tremendous challenges. Notwithstanding ever-decreasing project budgets, time to market, and product lifetimes, designers are faced with ever-increasing system complexity and customer quality expectations. This situation calls for ever better formal verification techniques at all steps of the design flow. This special issue is devoted to publishing revised versions of contributions first presented at the 12th Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME), held 21–24 October 2003 in L’Aquila, Italy. Authors of well-regarded papers from CHARME’03 were invited to submit to this special issue. All papers included here have been suitably extended and have undergone an independent round of reviewing.

13.
The avalanche of data from scientific instruments and the ensuing interest from geographically distributed users to analyze and interpret it accentuates the need for efficient data dissemination. A suitable data distribution scheme will find the delicate balance between conflicting requirements of minimizing transfer times, minimizing the impact on the network, and uniformly distributing load among participants. We identify several data distribution techniques, some successfully employed by today’s peer-to-peer networks: staging, data partitioning, orthogonal bandwidth exploitation, and combinations of the above. We use simulations to explore the performance of these techniques in contexts similar to those used by today’s data-centric scientific collaborations and derive several recommendations for efficient data dissemination. Our experimental results show that the peer-to-peer solutions that offer load balancing and good fault tolerance properties and have embedded participation incentives lead to unjustified costs in today’s scientific data collaborations deployed on over-provisioned network cores. However, as user communities grow and these deployments scale, peer-to-peer data delivery mechanisms will likely outperform other techniques.
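For context, a textbook-style lower-bound comparison (not the paper’s own model) makes the scaling argument concrete: distributing a file of size $F$ from a source with upload capacity $u_s$ to $N$ receivers, each with download capacity $d_i$ and upload capacity $u_i$, requires at least
\[
T_{\mathrm{client\text{-}server}} \;\ge\; \max\!\left\{ \frac{NF}{u_s},\; \frac{F}{\min_i d_i} \right\},
\qquad
T_{\mathrm{p2p}} \;\ge\; \max\!\left\{ \frac{F}{u_s},\; \frac{F}{\min_i d_i},\; \frac{NF}{u_s + \sum_i u_i} \right\}.
\]
The client-server bound grows linearly with $N$, while the peer-to-peer bound grows far more slowly because receivers contribute their own upload capacity, which is consistent with the expectation above that peer-to-peer delivery outperforms other techniques as user communities scale.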

14.
Complex software and systems are pervasive in today’s world. In a growing number of fields they have come to play a critical role. In order to provide a high assurance level, verification and validation (V&V) should be considered early in the development process. This paper shows how this can be achieved based on a goal-oriented requirements engineering framework which combines complementary semi-formal and formal notations. This allows the analyst to formalize only when and where needed and also preserves optimal communication with stakeholders and developers. For the industrial application of the methodology, a supporting toolbox was developed. It consists of a number of tightly integrated tools for performing V&V tasks at the requirements level. This is achieved through the use of (1) a roundtrip mapping between the requirements language and the specific formal languages used in the underlying formal tools (such as SAT or constraint solvers) and (2) graphical views using domain-based representations. This paper focuses on two major and representative tools: the Refinement Checker (for verification) and the Animator (for validation).

15.
The importance of reporting is ever increasing in today’s fast-paced market environments, and the availability of up-to-date information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing (OLTP) systems, with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide the flexibility, detail, and timeliness needed for today’s operational reporting. As technology advances, this separation has to be re-evaluated, and means to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing means to evaluate the performance of enterprise data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the reason for separating the two workloads on different systems. In this article, we introduce the concepts, logical data schema, transactions and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing, globally operating enterprises.

16.
Summary.  The computational power of concurrent data types has been the focus of much recent research. Herlihy showed that such power may be measured by the type’s ability to implement wait-free consensus. Jayanti argued that this ability could be measured in different ways, depending, for example, on whether or not read/write registers could be used in an implementation. He demonstrated the significance of this distinction by exhibiting a nondeterministic type whose ability to implement consensus was increased with the availability of registers. We show that registers cannot increase the ability to implement wait-free consensus of any deterministic type or of any type that can, without them, implement consensus for at least two processes. These results significantly impact the study of the wait-free hierarchies of concurrent data types. In particular, the combination of these results with other recent work suggests that Jayanti’s h_m hierarchy is robust for certain classes of deterministic types.
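For orientation, one common way to write the two measures at issue, in the style of Jayanti’s notation (an assumption here; the paper’s exact definitions may differ), is
\[
h_m(T) \;=\; \max\{\, n : \text{wait-free consensus for } n \text{ processes is solvable using any number of objects of type } T \,\},
\]
\[
h^{r}_{m}(T) \;=\; \max\{\, n : \text{wait-free consensus for } n \text{ processes is solvable using any number of objects of type } T \text{ together with read/write registers} \,\}.
\]
In this notation, the main result above states that $h^{r}_{m}(T) = h_{m}(T)$ whenever $T$ is deterministic or $h_{m}(T) \ge 2$.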

17.
To solve today’s ecological problems, scientists need well documented, validated, and coherent data archives. Historically, however, ecologists have collected and stored data idiosyncratically, making data integration even among close collaborators difficult. Further, effective ecology data warehouses and subsequent data mining require that individual databases be accurately described with metadata against which the data themselves have been validated. Using database technology would make documenting data sets for archiving, integration, and data mining easier, but few ecologists have the expertise to use database technology, and they cannot afford to hire programmers. In this paper, we identify the benefits that would accrue from ecologists’ use of modern information technology and the obstacles that prevent that use. We describe our prototype, the Canopy DataBank, through which we aim to enable individual ecologists in the forest canopy research community to be their own database programmers. The key feature that makes this possible is domain-specific database components, which we call templates. We also show how additional tools that reuse these components, for example for visualization, could provide gains in productivity and motivate the use of new technology. Finally, we suggest ways in which communities might share database components and how components might be used to foster easier data integration to solve new ecological problems.

18.
International Journal on Document Analysis and Recognition (IJDAR) - The importance of automated document understanding in terms of today’s businesses’ speed, efficiency, and cost...

19.
Peter Gena, AI & Society, 2012, 27(2): 197-205
Numbers have been identified with symbolic data forever. The profound association of both with acoustics, music, and sonic art from Pythagoras to current work is beyond reproach. Recently, sonification has looked for ways to turn symbolic data (representing results or measurements) as well as “raw” data (signals, impulses, images, etc.) into compositions. In the strictest sense, everything in a computer is symbolic, that is, represented by 0s and 1s. In the arts, the digital age has broadened and enhanced the conceptual landscape not simply through its servitude to the creative process, but as its partner. However, there is a rich history of the use of data that no doubt has paved the way for many of today’s experiments, including my own.

20.
