Similar Documents
 20 similar documents found (search time: 536 ms)
1.
Abstract

Inspired by a type of synesthesia in which colour typically induces musical notes, the MusiCam project investigates this unusual condition, particularly the transition from colour to sound. MusiCam explores the potential benefits of this idiosyncrasy as a mode of human-computer interaction (HCI), providing a host of meaningful applications spanning control, communication and composition. Colour data is interpreted by means of an off-the-shelf webcam, and music is generated in real time through regular speakers. By making colour-based gestures, users can actively control the parameters of sounds, compose melodies and motifs, or mix multiple tracks on the fly. The system shows great potential as an interactive medium and as a musical controller. The trials conducted to date have produced encouraging results, and they only hint at the new possibilities achievable with such a device.
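The abstract does not specify MusiCam's actual colour-to-sound mapping, but the idea of turning a webcam pixel into a note can be sketched. The scale, octave rule, and function name below are illustrative assumptions, not the project's implementation:

```python
import colorsys

def rgb_to_midi_note(r, g, b, scale=(0, 2, 4, 5, 7, 9, 11), base_note=60):
    """Hypothetical mapping: hue picks a major-scale degree, lightness the octave."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    degree = scale[int(h * len(scale)) % len(scale)]  # hue selects the scale degree
    octave = int(l * 3) - 1                           # lightness shifts the octave
    return base_note + degree + 12 * octave

# Pure red (hue 0, mid lightness) lands on the root of the scale, middle C
print(rgb_to_midi_note(255, 0, 0))  # → 60
```

A real system would sample a region of the webcam frame per video tick and smooth the hue over time before triggering notes.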

2.
NURBS (non-uniform rational B-spline) modelling has become a ubiquitous tool within architectural design praxis. In this article I examine three projects that utilise NURBS modelling as a means by which a musical system's inherent spatiality is visualised. There are numerous precedents in which architectural form is derived from a musical system, or a musical system is proportionally informed by architectonic gesture. I propose three NURBS modelling methodologies: for the spatial analysis of Karlheinz Stockhausen's sound projection geometries in Pole für 2; for a spatial realisation of John Cage's indeterminate work Variations III; and for the generation of a surface manifold informed by musically derived soundscape data from the Japanese garden Kyu Furukawa Teien. Rather than seeking to translate music into inhabitable architecture, or architectonic form into music, I highlight an approach that produces an interstitial territory between discourses on architecture and music analysis.

3.
ABSTRACT

Singing, like dance, emerges directly from the body. The voice, in combination with whole body movement, constitutes a potent form of self-expression. Gestural systems offer a specialized context in which to explore the intersection between voice and movement. The practice-based investigation presented in this article charts the development of an original musical work, Intangible Spaces, which gives form to the invisible aspects of voice and movement through gestural control, physical modelling synthesis and visual feedback. I draw on embodied and performative autoethnographic methods to capture the felt sensations and sound-movement associations that arise during the composition process. I also explore the performance approaches of key practitioners in the area to gain a broader understanding of the ways in which musicians leverage existing performance skills to uncover novel connections between movement and voice in gestural performance.

4.
UnWind is a musical biofeedback interface that combines nature sounds and sedative music into a form of New-Age music for relaxation exercises. The nature sounds respond to the user's physiological data, functioning as an informative layer for biofeedback display. The sedative music aims to induce calmness and evoke positive emotions. UnWind incorporates the benefits of biofeedback and sedative music to facilitate deep breathing, moderate arousal, and promote mental relaxation. We evaluated UnWind in a 2 × 2 factorial experiment with music and biofeedback as independent factors. Forty young adults performed the relaxation exercise under one of the following conditions after experiencing a stressful task: nature sounds only (NS), nature sounds with music (NM), auditory biofeedback with nature sounds (NSBFB), or UnWind musical biofeedback (NMBFB). The results revealed a significant interaction effect between music and biofeedback on the improvement of heart rate variability. The combination of music and nature sounds also showed benefits in lowering arousal and reducing self-reported anxiety. We conclude with a discussion of UnWind for biofeedback and the wider potential of blending nature sounds with music as a musical interface.

5.
Abstract

The article discusses the role of technology in integrating acoustic instruments within soundscape composition in a live and interactive context, thus encouraging an engagement with the sonic environment. The approaches to two compositions that combine instruments with live electronics and pre-composed soundscapes – Cold Wood (bass trombone) and Arcando (alto saxophone) – are considered. These pieces create unique live performance scenarios in order to investigate how acoustic instruments might be positioned within, and reframed by, soundscape composition practice. Potential pitfalls inherent in mixed media works are explored, proposing technological solutions through sound processing and use of microphones, whilst ensuring that 'liveness' remains more aesthetic than procedural. Aspects of field recording practice influence the behaviour of the instruments, and improvisation is encouraged in both pieces to ensure unpredictability. Consequently, it is also proposed that the process of listening to, capturing, and processing the sound of the instrument in a live performance scenario is analogous to the practice of field recording, in which there is little control over the sound events.

6.
Recent advances in physics-based sound synthesis have unveiled numerous possibilities for the creation of new musical instruments. Although research on physics-based sound synthesis has been going on for three decades, its higher computational complexity compared to that of signal modeling has limited its use in real-time applications. This limitation has motivated research on parallel processing architectures that support the physics-based sound synthesis of musical instruments. In this paper, we present analytical results of the design space exploration of many-core processors for the physics-based sound synthesis of plucked-string instruments, including the acoustic guitar, the classical guitar and the gayageum, a representative Korean plucked-string instrument. We do so by quantitatively evaluating the significance of the sample-per-processing-element (SPE) ratio, i.e., the amount of sample data directly mapped to each processing element (equivalent to varying the number of processing elements for a fixed sample size), for system performance and efficiency using architectural and workload simulations. The effect of the SPE ratio is difficult to analyze because varying it fundamentally affects both hardware and software design. In addition, the optimal SPE ratio is not typically at either extreme of its range, i.e., one sample per processor or one processor per entire sample. This paper illustrates the correlation between a fixed problem sample size, the SPE ratio and the processing element (PE) architecture for a target implementation in 130-nm CMOS technology. Experimental results indicate that an SPE ratio in the range of 5513 down to 2756, which is equivalent to 48 to 96 PEs for the guitars and 96 to 192 PEs for the gayageum, provides the most efficient operation for the synthesis of musical sounds sampled at 44.1 kHz, yielding the highest task throughput per unit area or per unit energy. In addition, the synthesized sounds are very similar to the original sounds, and the selected optimal many-core configurations outperform commercial processor architectures, including DSPs, FPGAs, and GPUs, in terms of area efficiency and energy efficiency.
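The paper's synthesis model and its many-core mapping are not detailed in the abstract, but the simplest physics-based plucked-string model, the classic Karplus-Strong algorithm, illustrates the per-sample feedback loop whose buffer is exactly the kind of sample data an SPE ratio would partition across processing elements. This is a generic sketch, not the paper's implementation:

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100, decay=0.996):
    """Karplus-Strong plucked string: noise burst through a low-passed delay line."""
    period = int(sample_rate / frequency)                     # delay-line length
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the "pluck"
    out = []
    for i in range(int(duration * sample_rate)):
        out.append(buf[i % period])
        # average two adjacent samples (low-pass) and feed back with decay
        buf[i % period] = decay * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

samples = karplus_strong(440.0, 0.5)
print(len(samples))  # → 22050 samples for half a second at 44.1 kHz
```

At 44.1 kHz a 440 Hz string needs a delay line of only about 100 samples, which is why partitioning the sample stream (rather than the tiny string state) across PEs is the interesting design question.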

7.
Abstract

While rich support for a wide variety of media such as text, video and images is common among contemporary hypermedia systems, support for audio remains inadequate. The primary reason that audio has not attracted as much attention as other media is its obvious lack of visual identity. The main focus of this work was to identify a generic and meaningful visual representation of audio within a hypermedia context, and to significantly improve hypermedia support for audio through the provision of a sound viewer.

This paper describes the inherent difficulties in providing a consistent interface to audio, and discusses in some depth the issues raised during the development process. The sound viewer is then introduced and the associated concepts described. The creation and traversal of links to and from audio are facilitated by the sound viewer across formats including WAV (Microsoft's proprietary digital sound file format), CD (Compact Disc) Audio and MIDI (Musical Instrument Digital Interface). The resultant viewer provides a unified and extensible framework for interacting with audio from within an open hypermedia environment. The open hypermedia system Microcosm was used as the development platform for this work. Microcosm can be augmented to supply a hypermedia link service to additional media with minimal overhead.

8.
9.
The current concept of robots has been greatly influenced by the image of robots in science fiction. Since robots were introduced into human society as partners, the importance of human-robot interaction has grown. In this paper, we have designed seven musical sounds for the English-teacher robot Silbot: five that express intention and two that express emotion. To identify the sound design considerations, we analyzed the sounds of the robots R2-D2 and Wall-E from two popular movies, Star Wars and Wall-E, respectively. From the analysis, we found that intonation, pitch, and timbre are the dominant musical parameters for expressing intention and emotion. To check the validity of the designed sounds, we performed a recognition rate experiment. The experiment showed that the five sounds designed for intentions and the two designed for emotions are sufficient to deliver the intended intentions and emotions.
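The seven Silbot sounds themselves are not specified in the abstract, but its finding that intonation, pitch and timbre carry intention can be sketched: a rising pitch contour for an affirmative beep and a falling one for a negative beep, with a fixed sine timbre. Everything below (frequencies, durations, the affirmative/negative convention) is an illustrative assumption:

```python
import math

def intention_beep(rising=True, f_start=400.0, f_end=800.0,
                   duration=0.25, sample_rate=44100):
    """Generate a sine glide; the direction of the glide encodes the intention."""
    if not rising:
        f_start, f_end = f_end, f_start
    n = int(duration * sample_rate)
    samples, phase = [], 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n   # linear glide = intonation contour
        phase += 2 * math.pi * f / sample_rate    # accumulate phase for a smooth sweep
        samples.append(0.5 * math.sin(phase))     # fixed sine timbre
    return samples

yes_beep = intention_beep(rising=True)   # rising contour: affirmative
no_beep = intention_beep(rising=False)   # falling contour: negative
print(len(yes_beep))  # → 11025
```

Timbre variation (e.g. adding harmonics) would be the third parameter the paper identifies; it is omitted here for brevity.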

10.
As music can be represented symbolically, most of the existing methods extend some string matching algorithms to retrieve musical patterns in a music database. However, not all retrieved patterns are perceptually significant because some of them are, in fact, inaudible. Music is perceived in groupings of musical notes called streams. The process of grouping musical notes into streams is called stream segregation. Stream-crossing musical patterns are perceptually insignificant and should be pruned from the retrieval results. This can be done if all musical notes in a music database are segregated into streams and musical patterns are retrieved from the streams. Findings in auditory psychology are utilized in this paper, in which stream segregation is modelled as a clustering process and an adapted single-link clustering algorithm is proposed. Supported by experiments on real music data, streams are identified by the proposed algorithm with considerable accuracy.
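The abstract models stream segregation as single-link clustering of notes by proximity in time and pitch. The distance weights and threshold below are assumptions for illustration, not the paper's adapted algorithm:

```python
def segregate(notes, threshold=3.5, pitch_weight=1.0, time_weight=2.0):
    """notes: list of (onset_seconds, midi_pitch). Returns a list of streams."""
    def dist(a, b):
        # weighted city-block distance over pitch and onset time (assumed metric)
        return (pitch_weight * abs(a[1] - b[1]) +
                time_weight * abs(a[0] - b[0]))

    streams = [[n] for n in notes]  # start with singleton clusters
    merged = True
    while merged:
        merged = False
        for i in range(len(streams)):
            for j in range(i + 1, len(streams)):
                # single-link: distance between the closest pair of notes
                d = min(dist(a, b) for a in streams[i] for b in streams[j])
                if d < threshold:
                    streams[i].extend(streams.pop(j))
                    merged = True
                    break
            if merged:
                break
    return streams

# Two interleaved voices: a low line and a high line crossing in time
notes = [(0.0, 48), (0.25, 72), (0.5, 50), (0.75, 74), (1.0, 47), (1.25, 71)]
print(len(segregate(notes)))  # → 2: the low and high notes form separate streams
```

A pattern matcher restricted to these streams would never return the perceptually insignificant patterns that cross between the low and high voices.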
Man Hon Wong

11.
While there are many parallels between computing activities in musicology and those in other humanities disciplines, the particular nature of musical material and the ways in which this must be accommodated set many activities apart from those in text-based disciplines. As in other disciplines, early applications were beset by hardware constraints, which placed a premium on expertise and promoted design-intensive projects. Massive musical encoding and bibliographical projects were initiated. Diversification of hardware platforms and languages in the Seventies led to task-specific undertakings, including preliminary work on many of today's programs for music printing and analysis. The rise of personal computers and associated general-purpose software in the Eighties has enabled many scholars to pursue projects individually, particularly with the assistance of database, word processing, and notation software. Current issues facing the field include the need for standards for data interchange, the creation of banks of reusable data, the establishment of qualitative standards for encoded data, and the encouragement of realistic appraisals of what computers can do. The musicologist Eleanor Selfridge-Field, who is the author of three books on Italian music and numerous articles, editions, and reviews, has worked at CCARH since its founding in 1984. Her most recent book, The Music of Benedetto and Alessandro Marcello (Oxford: Clarendon Press, 1990), which contains 1300 musical examples, was produced from camera-ready copy supplied by CCARH. Drs. Hewlett and Selfridge-Field jointly edit the series Computing in Musicology, which is published by CCARH, and co-chair the International Musicological Society's Study Group on Musical Data and Computer Applications. Walter B. Hewlett, the founder and director of the Center for Computer Assisted Research in the Humanities, holds degrees in physics, engineering science, and operations research in addition to a doctorate in music. 
He is the designer of the input, storage, and retrieval system for musical information that is in active use at CCARH for the encoding of the complete works of J. S. Bach, Handel, Mozart and other composers.

12.
The research presented in this paper focuses on global tempo transformations of monophonic audio recordings of saxophone jazz performances. We investigate the problem of how a performance played at a particular tempo can be rendered automatically at another tempo while preserving naturally sounding expressivity; stated differently, how does expressiveness change with global tempo? Changing the tempo of a given melody is a problem that cannot be reduced to simply applying a uniform transformation to all the notes of a musical piece. The expressive resources for emphasizing the musical structure of the melody and the affective content differ depending on the performance tempo. We present a case-based reasoning system called TempoExpress for addressing this problem, and describe the experimental results obtained with our approach. Editor: Gerhard Widmer

13.
We built a limited but successful user interface management system named HYPE which supports rapid interactive creation and organization of user interfaces for a large class of applications. HYPE is targeted at applications for which the user interface is only loosely coupled to the application. Examples of this class of application are ‘command line-driven’ programs. Many applications in this class can be quickly given satisfactory direct-manipulation interfaces with little or no reprogramming of the application. The programmer need only be familiar with HYPE, and not with the particular windowing system upon which it sits.
  1. The appearance of the interface is specified interactively through the direct manipulation of interface components.
  2. The behaviour of the interface is programmed with an interpreted procedural language which can send and receive messages and invoke system services. In particular, it can execute applications.
  3. The structure of the interface is a tree of potentially visible objects which communicate with the user, the system, and each other through message passing. The tree structure facilitates grouping interfaces for related applications, or families of applications, into a single master interface.
Visual layout, tree-building, behaviour assignment (programming) and execution of the interface all occur within HYPE, a conjunction that makes it a powerful prototyping tool.

14.
Hand gestures have great potential to act as a computer interface in the entertainment environment. However, there are two major problems when implementing a hand gesture-based interface for multiple users: the complexity problem and the personalization problem. In order to solve these problems and implement a multi-user data glove interface successfully, we propose an adaptive mixture-of-experts model for data-glove based hand gesture recognition. The proposed model consists of a mixture-of-experts used to recognize the gestures of an individual user, and a teacher network trained with gesture data from multiple users. The mixture-of-experts model is trained with an expectation-maximization (EM) algorithm and an on-line learning rule. The model parameters are adjusted based on the feedback received from the real-time recognition of the teacher network. The model is applied to a musical performance game with a data glove (5DT Inc.) as a practical example. Comparison experiments using several representative classifiers showed both the outstanding performance and the adaptability of the proposed method. A usability assessment completed by users while playing the musical performance game revealed the usefulness of the data glove interface system with the proposed method.
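The core mixture-of-experts idea, a gating function that weights each expert's class distribution, can be sketched independently of the paper's EM and on-line training loop, which are omitted here. All weights below are hypothetical:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_predict(x, experts, gate):
    """x: feature vector; experts: list of (classes x features) weight matrices;
    gate: (experts x features) weight matrix. Returns mixed class probabilities."""
    g = softmax([sum(w * xi for w, xi in zip(row, x)) for row in gate])
    scores = [0.0] * len(experts[0])
    for weight, expert in zip(g, experts):
        out = softmax([sum(w * xi for w, xi in zip(row, x)) for row in expert])
        scores = [s + weight * o for s, o in zip(scores, out)]
    return scores

# Two experts, two gesture classes, three glove features (weights are made up)
experts = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
           [[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]]
gate = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
scores = moe_predict([1.0, 0.2, 0.1], experts, gate)
print(abs(sum(scores) - 1.0) < 1e-9)  # → True: a mixture of softmaxes is still a distribution
```

In the paper's setting the gate would effectively learn which expert best matches the current user, guided by feedback from the teacher network.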

15.
In this paper, we present our approach to designing and implementing a virtual 3D sound sculpting interface that creates audiovisual results from hand motions in real time. In the interface, "Virtual Pottery," we use the metaphor of pottery creation in order to adapt natural hand motions to 3D spatial sculpting. Users can create their own pottery pieces by changing the position of their hands in real time, and also generate 3D sound sculptures based on pre-existing rules of music composition. The interface of Virtual Pottery can be categorized by shape design and camera sensing type. This paper describes how we developed the two versions of Virtual Pottery and implemented the technical aspects of the interfaces. Additionally, we investigate ways of translating hand motions into musical sound. The accuracy of hand motion detection is crucial for translating natural hand motions into virtual reality. According to the results of preliminary evaluations, both the motion-capture tracking system and the portable depth-sensing camera track hand motions with high accuracy. We carried out user studies, drawing on data from two exhibitions and users of various ages. Overall, Virtual Pottery serves as a bridge between the virtual environment and traditional art practices, and it can help cultivate the deep potential of virtual musical instruments and future art education programs.

16.
We propose a storage-efficient, fast and parallelizable out-of-core framework for streaming computations of high-resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re-normalization and narrow-band rebuild into a single pass over the data stored on disk. When combined with a new data layout on disk, this improves the overall performance when compared to previous streaming level set frameworks that require multiple passes over the data for each time-step. As a result, streaming level set computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations.

17.

The support vector machine (SVM) is a popular classification model for speaker verification. However, although the SVM is suitable for classifying speakers, choosing values for the free parameters C and γ of the SVM model has been a challenging technical problem. An improper value set for the free parameter pair (C, γ) can cause unsatisfactory recognition accuracy in speaker verification. Moreover, the sound source localization information of the collected acoustic data has a large effect on the recognition performance of SVM speaker verification. In response, this study developed a sound source localization-driven fuzzy scheme to help determine the optimal value set of (C, γ) for the establishment of an SVM model. Specifically, this scheme adopts the estimated time difference of arrival (TDOA) information derived from the Kinect microphone array (containing both the angle and distance information of the speaker's acoustic data) to optimally calculate the values of the SVM free parameters C and γ. It was demonstrated that speaker verification using an SVM with a properly estimated parameter pair (C, γ) achieves a higher recognition rate than one with an arbitrarily chosen value set for (C, γ).


18.
To listen to brain activity as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translates the scalp electroencephalogram (EEG) into music notes according to the power law of both EEG and music. In the current study, this methodology is further extended to a musical ensemble of two channels. First, EEG data from two selected channels are translated into musical instrument digital interface (MIDI) sequences, where the EEG parameters modulate the pitch, duration, and volume of each musical note. The phase synchronization index of the two channels is computed by a Hilbert transform. The two MIDI sequences are then integrated into a chorus according to the phase synchronization index: EEG with a high synchronization index is represented by more consonant musical intervals, while a low index is expressed by dissonant musical intervals. The brain ensemble derived from real EEG segments illustrates differences in harmony and pitch distribution between the eyes-closed and eyes-open states. Furthermore, scale-free phenomena exist in the brainwave ensemble. The scale-free brain ensemble modulated by phase synchronization is therefore a new attempt to express the EEG in an auditory and musical way, and it can be used for EEG monitoring and biofeedback.
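The abstract says EEG parameters modulate pitch, duration and volume per note, but not the concrete mapping; SFBM's power-law rule is not reproduced here. The ranges and the choice of which parameter drives which note attribute below are illustrative assumptions:

```python
def eeg_epoch_to_note(amplitude, period_s, power,
                      amp_range=(0.0, 100.0), pitch_range=(36, 96)):
    """Map one EEG epoch (assumed µV amplitude, waveform period, signal power)
    to a MIDI-like note dict. The mapping itself is a hypothetical example."""
    lo, hi = amp_range
    frac = min(max((amplitude - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    pitch = pitch_range[0] + round(frac * (pitch_range[1] - pitch_range[0]))
    duration = period_s            # waveform period sets the note length
    velocity = min(127, int(power))  # signal power sets the MIDI volume, capped
    return {"pitch": pitch, "duration": duration, "velocity": velocity}

note = eeg_epoch_to_note(amplitude=50.0, period_s=0.4, power=90)
print(note["pitch"])  # → 66: mid-range amplitude lands mid-range in pitch
```

In the two-channel ensemble described above, the per-channel note streams produced this way would then be constrained to consonant or dissonant intervals according to the phase synchronization index.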

19.
The H∞ control problem for memristive neural networks with aperiodic sampling and actuator saturation is considered in this paper. A novel approach that combines the discrete-time Lyapunov theorem with sampled-data systems is proposed to cope with the aperiodic sampling problem. On the basis of this method, and by choosing a polyhedral set, sufficient conditions to determine the ellipsoidal region of asymptotic stability and exponential stability for the estimation error system are obtained through saturating sampled-data control. Furthermore, the H∞ performance index of memristive neural networks with disturbance is also analyzed, and the observer and controller gains are calculated from stability conditions expressed as linear matrix inequalities. Finally, the effectiveness of the theoretical results is illustrated through numerical examples.

20.
A method to solve weakly non-linear partial differential equations with Volterra series is presented in the context of single-input systems. The solution x(z,t) is represented as the output of a z-parameterized Volterra system, where z denotes the space variable, although z could also have a different meaning or be a vector. In place of deriving the kernels from purely algebraic equations, as in the standard case of ordinary differential systems, the problem turns into solving linear differential equations. This paper introduces the method on an example: a dissipative Burgers' equation which models acoustic propagation and accounts for the dominant effects involved in brass musical instruments. The kernels are computed analytically in the Laplace domain. As a new result, writing the Volterra expansion for periodic inputs leads to the analytic resolution of the harmonic balance method, which is frequently used in acoustics. Furthermore, the ability of the Volterra system to treat other signals constitutes an improvement for sound synthesis, allowing simulation of any regime, including attacks and transients. Numerical simulations are presented and their validity is discussed.
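For orientation, a standard form of the dissipative Burgers equation is shown below; the abstract does not give the exact formulation used for brass-instrument propagation, so the coefficients here are generic, with ν a dissipation coefficient and x(z,t) the field variable from the abstract:

```latex
\frac{\partial x}{\partial t} + x\,\frac{\partial x}{\partial z}
  = \nu\,\frac{\partial^{2} x}{\partial z^{2}}
```

The quadratic term x ∂x/∂z is the weak non-linearity that the Volterra-series expansion captures order by order, while the right-hand side supplies the dissipation.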
