Journal articles on the topic 'Auditory display'

Consult the top 50 journal articles for your research on the topic 'Auditory display.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Storms, Russell L., and Michael J. Zyda. "Interactions in Perceived Quality of Auditory-Visual Displays." Presence: Teleoperators and Virtual Environments 9, no. 6 (December 2000): 557–80. http://dx.doi.org/10.1162/105474600300040385.

Abstract:
The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
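
For readers who want to prototype the kind of auditory degradation manipulated in this study (reduced sampling frequency plus Gaussian white noise), a minimal sketch follows; the rates and noise level are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def degrade_audio(signal, fs_orig, fs_target, noise_rms):
    """Crudely lower the effective sampling frequency of `signal`
    and add Gaussian white noise, mimicking the two auditory
    manipulations described in the abstract."""
    # Decimate, then zero-order-hold back to the original rate
    # (a crude stand-in for true resampling, kept dependency-free).
    step = max(1, int(round(fs_orig / fs_target)))
    held = np.repeat(signal[::step], step)[: len(signal)]
    # Add Gaussian white noise at the requested RMS level.
    noise = np.random.normal(0.0, noise_rms, size=held.shape)
    return held + noise

# Example: a 1 kHz tone at 44.1 kHz, degraded to ~11 kHz with noise.
fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
degraded = degrade_audio(tone, fs_orig=fs, fs_target=11025, noise_rms=0.05)
```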
2

Eldridge, Alice. "Issues in Auditory Display." Artificial Life 12, no. 2 (January 2006): 259–74. http://dx.doi.org/10.1162/artl.2006.12.2.259.

Abstract:
Auditory displays have been successfully deployed to assist data visualization in many areas, but have as yet received little attention in the field of artificial life. This article presents an overview of existing design approaches to auditory display and highlights some of the key issues that are of practical relevance to the use of auditory displays in artificial life research. Examples from recent experiments are used to illustrate the importance of considering factors such as data characteristics, data-display mappings, perceptual interactions within and between display modalities, and user experience and training in designing new visualization tools. It is concluded that while further research is needed to develop generic design principles for auditory display, this should not stand in the way of exploration of bespoke designs for specific applications.
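
As a concrete instance of the data-display mappings the article surveys, the sketch below maps data values linearly onto pitch and renders one tone per value; the frequency range and tone duration are arbitrary assumptions.

```python
import numpy as np

def sonify(data, f_lo=220.0, f_hi=880.0, dur=0.2, fs=44100):
    """Map each data value linearly onto a pitch between f_lo and
    f_hi and render the result as a sequence of sine tones."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    freqs = f_lo + norm * (f_hi - f_lo)
    t = np.arange(int(dur * fs)) / fs
    tones = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones)

audio = sonify([3.1, 4.7, 2.2, 5.0, 4.1])  # higher values -> higher pitch
```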
3

Shinn-Cunningham, Barbara G., and Timothy Streeter. "Spatial auditory display." ACM Transactions on Applied Perception 2, no. 4 (October 2005): 426–29. http://dx.doi.org/10.1145/1101530.1101537.

4

King, Robert A., and Gregory M. Corso. "Auditory Displays: If They are so Useful, Why are they Turned Off?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 9 (October 1993): 549–53. http://dx.doi.org/10.1177/154193129303700907.

Abstract:
Pilots often turn off the auditory displays which are provided to improve their performance (Weiner, 1977; Veitengruber, Boucek, & Smith, 1977). The intensity of the auditory display is often cited as a possible cause of this behavior (Cooper, 1977). However, the processing of the additional information is a concurrent task demand which may increase subjective workload (Wickens & Yeh, 1983; McCloy, Derrick, & Wickens, 1983). Pilots may attempt to reduce subjective workload at the expense of performance by turning off the auditory display. Forty undergraduate males performed a visual search task. Three conditions (auditory display on, auditory display off, and subject's choice) were run in combination with nine levels of visual display load. The auditory display, a 4000 Hz tone with a between-subjects intensity of 60 dB(A), 70 dB(A), 80 dB(A), or 90 dB(A), indicated that the target letter was in the lower half of the search area. NASA-TLX (Task Load Index) was used to measure the subjective workload of the subjects after each block of trials (Hart & Staveland, 1988). A non-monotonic relationship was found between auditory display intensity and auditory display usage. Evidence was found that the auditory display increased some aspects of subjective workload: physical demands and frustration. Furthermore, there was a dissociation of performance and subjective workload in the manner predicted by Wickens and Yeh (1983). The implications of these results for display design are discussed.
5

Roddy, Stephen, and Dermot Furlong. "Embodied Aesthetics in Auditory Display." Organised Sound 19, no. 1 (February 26, 2014): 70–77. http://dx.doi.org/10.1017/s1355771813000423.

Abstract:
Aesthetics are gaining increasing recognition as an important topic in auditory display. This article looks to embodied cognition to provide an aesthetic framework for auditory display design. It calls for a serious rethinking of the relationship between aesthetics and meaning-making in order to tackle the mapping problem which has resulted from historically positivistic and disembodied approaches within the field. Arguments for an embodied aesthetic framework are presented. An early example is considered and suggestions for further research on the road to an embodied aesthetics are proposed. Finally a closing discussion considers the merits of this approach to solving the mapping problem and designing more intuitively meaningful auditory displays.
6

Perkis, Tim, and Gregory Kramer. "Auditory Display: Sonification, Audification, and Auditory Interfaces." Computer Music Journal 19, no. 2 (1995): 110. http://dx.doi.org/10.2307/3680606.

7

Nemire, Kenneth. "Virtual Visual and Auditory Display Aids for a Peg Insertion Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1263–67. http://dx.doi.org/10.1177/1071181397041002120.

Abstract:
Effects of visual and auditory display enhancements to a pick-and-place task performed in an immersive virtual environment were evaluated to determine whether the enhancements may replace depth information provided by stereoscopic visual displays. Participants used a commercial head-mounted display, spatial trackers on the head and hand, and a control wand. Independent variables included biocular or stereo viewing, movement amplitude, target diameter, and audio or visual enhancements. Dependent variables were movement time and number of discrete movements required to complete the task. Results indicated the stereo display and the display enhancements provided no performance advantages over the biocular display for the easier task conditions. Further, visual and auditory enhancements to the biocular display were found that resulted in performance that was not different from using stereoscopic displays. Implications of the results are discussed.
8

Darkow, David J., and William P. Marshak. "In Search of an Objective Metric for Complex Displays." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 19 (October 1998): 1361–65. http://dx.doi.org/10.1177/154193129804201907.

Abstract:
Advanced displays for military and other user-interaction intensive systems need objective measures of merit for analyzing the information transfer from the displays to the user. A usable objective metric for display interface designers needs to be succinct, modular and scalable. The authors have combined the concepts of weighted Signal to Noise Ratio (SNR) and multidimensional correlation to calculate a novel index of display complexity. Preliminary data supporting the development of this metric for complex visual, auditory and mixed auditory and visual displays will be presented. Analysis of the human subject data indicates the coefficients for the algorithm are easily determined. Furthermore, the metric can predict reaction-times and accuracy rates for complex displays. This combination of semi-automated reduction of display information and calculation of a single complexity index makes this algorithm a potentially convenient tool for designers of complex display interfaces.
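
The abstract names the ingredients of the metric (a weighted SNR and a multidimensional correlation) but not the formula, so the following is only a speculative toy combination of those two ideas; the weights, the dB scaling, and the combination rule are all invented for illustration.

```python
import numpy as np

def toy_complexity_index(signal_power, noise_power, weights, features):
    """Speculative display-complexity index: a weighted SNR (in dB)
    scaled by the average absolute inter-feature correlation.
    `features` is an (n_samples, n_dims) array of display dimensions.
    This is NOT the authors' formula, just one way to combine the
    two concepts their abstract names."""
    snr_db = 10 * np.log10(signal_power / noise_power)
    weighted_snr = np.dot(weights, snr_db) / np.sum(weights)
    corr = np.corrcoef(features, rowvar=False)
    n = corr.shape[0]
    off_diag = np.abs(corr[~np.eye(n, dtype=bool)]).mean()
    return weighted_snr * (1.0 - off_diag)  # redundant dims lower complexity

rng = np.random.default_rng(0)
idx = toy_complexity_index(
    signal_power=np.array([4.0, 2.5]), noise_power=np.array([1.0, 0.5]),
    weights=np.array([0.6, 0.4]), features=rng.normal(size=(200, 3)))
```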
9

Blattner, Meera M. "A Design for Auditory Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 1 (July 2000): 219–22. http://dx.doi.org/10.1177/154193120004400159.

10

Ho, Anson, and Catherine Burns. "Music as an Auditory Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 57, no. 1 (September 2013): 1149–53. http://dx.doi.org/10.1177/1541931213571256.

11

Brungart, Douglas S. "Speech-based auditory distance display." Journal of the Acoustical Society of America 119, no. 4 (2006): 1915. http://dx.doi.org/10.1121/1.2195836.

12

Katz, Brian F. G., and Georgios Marentakis. "Advances in auditory display research." Journal on Multimodal User Interfaces 10, no. 3 (June 20, 2016): 191–93. http://dx.doi.org/10.1007/s12193-016-0226-7.

13

Martens, William L., and Michael Cohen. "Spatial Navigation by Seated Users of Multimodal Augmented Reality Systems." SHS Web of Conferences 102 (2021): 04022. http://dx.doi.org/10.1051/shsconf/202110204022.

Abstract:
When seated users of multimodal augmented reality (AR) systems attempt to navigate unfamiliar environments, they can become disoriented during their initial travel through a remote environment that is displayed for them via that AR display technology. Even when the multimodal displays provide mutually coherent visual, auditory, and vestibular cues to the movement of seated users through a remote environment (such as a maze), those users may make errors in judging their own orientation and position relative to their starting point, and also may have difficulty determining what moves to make in order to return themselves to their starting point. In a number of investigations using multimodal AR systems featuring real-time servo-controlled movement of seated users, the relative contribution of spatial auditory display technology was examined across a variety of spatial navigation scenarios. The results of those investigations have implications for the effective use of the auditory component of a multimodal AR system in applications supporting spatial navigation through a physical environment.
14

Waldstein, Robin S., and Arthur Boothroyd. "Speechreading Supplemented by Single-Channel and Multichannel Tactile Displays of Voice Fundamental Frequency." Journal of Speech, Language, and Hearing Research 38, no. 3 (June 1995): 690–705. http://dx.doi.org/10.1044/jshr.3803.690.

Abstract:
The benefits of two tactile codes of voice fundamental frequency (F0) were evaluated as supplements to the speechreading of sentences in two short-term training studies, each using 12 adults with normal hearing. In Experiment 1, a multichannel spatiotemporal display of F0, known as Portapitch, was used to stimulate the index finger. In an attempt to improve on past performance with this display, the coding scheme was modified to better cover the F0 range of the talker in the training materials. For Experiment 2, to engage kinesthetic/proprioceptive pathways, a novel single-channel positional display was built, in which F0 was coded as the vertical displacement of a small finger-rest. Input to both displays consisted of synthesized replicas of the F0 contours of the sentences, prepared and perfected off-line. Training with the two tactile F0 displays included auditory presentation of the synthesized F0 contours in conjunction with the tactile patterns on alternate trials. Speechreading enhancement by the two tactile F0 displays was compared to the enhancement provided when auditory F0 information was available in conjunction with the tactile patterns, by auditory presentation of a sinusoidal indication of the presence or absence of voicing, and by a single-channel tactile display of the speech waveform presented to the index finger. Despite the modified coding strategy, the multichannel Portapitch provided a mean tactile speechreading enhancement of 7 percentage points, which was no greater than that found in previous studies. The novel positional F0 display provided only a 4 percentage point enhancement. Neither F0 display was better than the simple single-channel tactile transform of the full speech waveform, which gave a 7 percentage point enhancement effect. Auditory speechreading enhancement effects were 17 percentage points with the voicing indicator and approximately 35 percentage points when the auditory F0 contour was provided in conjunction with the tactile displays. The findings are consistent with the hypothesis that subjects were not taking full advantage of the F0 variation information available in the outputs of the two experimental tactile displays.
15

Elias, Bartholomew. "The Effects of Spatial Auditory Preview on Dynamic Visual Search Performance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 23 (October 1996): 1227–31. http://dx.doi.org/10.1177/154193129604002317.

Abstract:
Since the auditory system is not spatially restricted like the visual system, spatial auditory cues can provide information regarding object position, velocity, and trajectory beyond the field of view. A laboratory experiment was conducted to demonstrate that visual displays can be augmented with dynamic spatial auditory cues that provide information regarding the motion characteristics of unseen objects. In this study, dynamic spatial auditory cues presented through headphones conveyed preview information regarding target position, velocity, and trajectory beyond the field of view in a dynamic visual search task in which subjects acquired and identified moving visual targets that traversed a display cluttered with varying numbers of moving distractors. The provision of spatial auditory preview significantly reduced response times to acquire and identify the visual targets and significantly reduced error rates, especially in cases when the visual display load was high. These findings demonstrate that providing dynamic spatial auditory preview cues is a viable mechanism for augmenting visual search performance in dynamic task environments.
16

Yang, Shiyan, and Thomas K. Ferris. "Measuring Cognitive Efficiency of Novel Speedometer Displays." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 1941–45. http://dx.doi.org/10.1177/1541931213601442.

Abstract:
Cognitive efficiency has been defined as the ratio of display informativeness to the mental workload required for processing information. In this study, we used the Cognitive Efficiency (CE) metric to measure the efficiency of novel speedometer displays that were previously used to aid multitask performance in driving. The study also investigated whether encoding information into a ‘beat pattern’, a redundant dimension, improves the cognitive efficiency of the displays. The level of cognitive efficiency was found to be similar among the ambient-visual, auditory, and tactile speedometer displays, and it increased substantially when information was encoded into a redundant beat pattern of the auditory display. In addition, the cognitive efficiency of the novel speedometer displays was sensitive to contextual factors such as wind effects. These findings enlarge our understanding of multifactor constructs of cognitive efficiency, which are expected to be more predictive of multitask performance than the measure of each of their constituent components.
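
Because the abstract defines cognitive efficiency as the ratio of display informativeness to the mental workload required to process it, the metric itself is a one-liner; which scales the two inputs use (for example, an information-transfer score and a NASA-TLX rating) is an assumption in this sketch.

```python
def cognitive_efficiency(informativeness, workload):
    """CE = display informativeness / mental workload, per the
    definition quoted in the abstract. Both inputs are assumed to be
    positive scores on whatever scales the experimenter chooses
    (e.g., an information-transfer score and a NASA-TLX rating)."""
    if workload <= 0:
        raise ValueError("workload must be positive")
    return informativeness / workload

# A display conveying the same information at lower workload scores higher:
ce_plain = cognitive_efficiency(informativeness=0.80, workload=55.0)
ce_beat  = cognitive_efficiency(informativeness=0.80, workload=40.0)
assert ce_beat > ce_plain
```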
17

Strybel, Thomas Z. "Auditory Spatial Information and Head-Coupled Display Systems." Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 75. http://dx.doi.org/10.1177/154193128803200215.

Abstract:
Developments of head-coupled control/display systems have focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360 degree synthesized environment, and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited, to some extent, by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality”, by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination”, will examine the interplay between the auditory, visual and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to provide efficient application of the visual channel.
18

Barrass, Stephen. "A comprehensive framework for auditory display." ACM Transactions on Applied Perception 2, no. 4 (October 2005): 403–6. http://dx.doi.org/10.1145/1101530.1101533.

19

Stewart, Rebecca, and Mark Sandler. "An Auditory Display in Playlist Generation." IEEE Signal Processing Magazine 28, no. 4 (July 2011): 14–23. http://dx.doi.org/10.1109/msp.2011.940883.

20

Sturm, Bob L. "International Conference on Auditory Display 2002." Computer Music Journal 27, no. 1 (March 2003): 84–86. http://dx.doi.org/10.1162/comj.2003.27.1.84.

21

Thompson, Matthew B., and Penelope M. Sanderson. "Multisensory Integration with a Head-Mounted Display and Auditory Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 52, no. 18 (September 2008): 1292–96. http://dx.doi.org/10.1177/154193120805201829.

22

Collett, Renae, Isaac Salisbury, Robert G. Loeb, and Penelope M. Sanderson. "Smooth or Stepped? Laboratory Comparison of Enhanced Sonifications for Monitoring Patient Oxygen Saturation." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 1 (June 10, 2019): 124–37. http://dx.doi.org/10.1177/0018720819845742.

Abstract:
Background: The pulse oximeter (PO) provides anesthesiologists with continuous visual and auditory information about a patient’s oxygen saturation (SpO2). However, anesthesiologists’ attention is often diverted from visual displays, and clinicians may inaccurately judge SpO2 values when relying on conventional PO auditory tones. We tested whether participants could identify SpO2 value (e.g., “97%”) better with acoustic enhancements that identified three discrete clinical ranges by either changing abruptly at two threshold values (stepped-effects) or changing incrementally with each percentage value of SpO2 (smooth-effects). Method: In all, 79 nonclinicians participated in a between-subjects experiment that compared performance of participants using the stepped-effects display with those who used the smooth-effects display. In both conditions, participants heard sequences of 72 tones whose pitch directly correlated to SpO2 value, and whose value could change incrementally. Primary outcome was percentage of responses that correctly identified the absolute SpO2 percentage, ±1, of the last pulse tone in each sequence. Results: Participants using the stepped-effects auditory tones identified absolute SpO2 percentage more accurately (M = 53.7%) than participants using the smooth-effects tones (M = 47.9%, p = .038). Identification of range and detection of transitions between ranges showed even stronger advantages for the stepped-effects display (p < .005). Conclusion: The stepped-effects display has more pronounced auditory cues at SpO2 range transitions, from which participants can better infer absolute SpO2 values. Further development of a smooth-effects display for this purpose is not necessary.
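
A rough sketch of the two mappings compared in the study: both displays correlate tone pitch with SpO2, while the enhancement effect changes abruptly at two range thresholds (stepped) or with every percentage point (smooth). The base pitch, threshold values, and effect scaling below are illustrative assumptions, not the study's settings.

```python
def pulse_tone_params(spo2, stepped=True, lo=90, hi=96):
    """Return (pitch_hz, effect_amount) for one pulse tone.
    Pitch rises with SpO2 in both displays; the enhancement effect
    is stepped across three clinical ranges or smoothly graded.
    Thresholds (90/96) and all scalings are assumptions."""
    pitch_hz = 300.0 + 10.0 * (spo2 - 80)  # pitch correlated with SpO2
    if stepped:
        # Abrupt change at the two range boundaries.
        effect = 0.0 if spo2 >= hi else (0.5 if spo2 >= lo else 1.0)
    else:
        # Incremental change with each percentage point below `hi`.
        effect = min(max((hi - spo2) / (hi - 80), 0.0), 1.0)
    return pitch_hz, effect

print(pulse_tone_params(97), pulse_tone_params(93), pulse_tone_params(88))
```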
23

Sorkin, Robert D., Donald E. Robinson, and Bruce G. Berg. "A Detection Theory Method for the Analysis of Visual and Auditory Displays." Proceedings of the Human Factors Society Annual Meeting 31, no. 11 (September 1987): 1184–88. http://dx.doi.org/10.1177/154193128703101101.

Abstract:
A signal detection method for evaluating different display codes and formats is described. The method allows one to determine how an observer aggregates information from multiple element displays. The method can be used to assess the relative importance of specific spatial or sequential elements of the display. The efficacy of different formats and arrangements thus can be compared. The paper describes the theoretical basis for the method and briefly summarizes data from several types of visual and auditory displays.
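
The method builds on standard detection-theory machinery; as a reminder of the core computation it extends, here is the textbook sensitivity index d' from hit and false-alarm rates (the paper's contribution, weighting individual display elements, goes beyond this sketch).

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Detection-theory sensitivity: d' = z(H) - z(F).
    Rates are clamped away from 0 and 1 to keep z finite."""
    clamp = lambda p: min(max(p, 1e-3), 1 - 1e-3)
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

print(d_prime(0.85, 0.20))  # ~1.88 for a moderately sensitive observer
```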
24

Simpson, Brian D., Douglas S. Brungart, Ronald C. Dallman, Jacque Joffrion, Michael D. Presnar, and Robert H. Gilkey. "Spatial Audio as a Navigation Aid and Attitude Indicator." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1602–6. http://dx.doi.org/10.1177/154193120504901722.

Abstract:
Most current display systems in general aviation (GA) environments employ, at best, relatively simple audio displays that do not fully exploit a pilot's auditory processing capabilities. Spatial audio displays, however, take advantage of the spatial processing capabilities of the auditory system and have the ability to provide, in an intuitive manner, comprehensive information about the status of an aircraft to the pilot. This paper describes a study conducted in order to assess the utility of spatial audio as (1) a navigation aid, and (2) an attitude indicator in an actual flight environment. Performance was measured in tasks requiring pilots to fly in the direction of a spatial audio “navigation beacon” and use an auditory artificial horizon display to detect changes in attitude and maintain straight and level flight when no visual cues were available. The results indicate that spatial audio displays can effectively be used by pilots for both navigation and attitude monitoring, and thus may be a valuable tool in supporting pilot situation awareness and improving overall safety in GA environments.
25

Winters, R. Michael, Neel Joshi, Edward Cutrell, and Meredith Ringel Morris. "Strategies for Auditory Display of Social Media." Ergonomics in Design: The Quarterly of Human Factors Applications 27, no. 1 (November 2, 2018): 11–15. http://dx.doi.org/10.1177/1064804618788098.

Abstract:
Social media is an overwhelmingly visual medium, and we ask the simple question: How can the data and images of social media posts be transformed into something as meaningful and vivid in the auditory sense? Such a design would be useful for eyes-free browsing and could enhance the existing visual media. Our strategy first uses artificial intelligence systems to transform low-level input data into high-level sociocultural features. These features are then conveyed using a multifactored temporal design that uses speech, sonification, auditory scenes, and music.
26

Omiya, Hidefumi, Akinori Komatsubara, and Shigeo Fujisaki. "Evaluation of Usability on Auditory Display Interface." Japanese journal of ergonomics 35, no. 1Supplement (1999): 145. http://dx.doi.org/10.5100/jje.35.1supplement_145.

27

Omiya, Hidefumi, Akinori Komatsubara, and Shigeo Fujisaki. "Evaluation of Usability on Auditory Display Interface." Japanese journal of ergonomics 35 (1999): 548–49. http://dx.doi.org/10.5100/jje.35.2supplement_548.

28

Hermann, T., and H. Ritter. "Sound and Meaning in Auditory Data Display." Proceedings of the IEEE 92, no. 4 (April 2004): 730–41. http://dx.doi.org/10.1109/jproc.2004.825904.

29

Zahorik, Pavel, and Zijiang J. He. "Virtual auditory display validation using transaural techniques." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2230. http://dx.doi.org/10.1121/1.4920136.

30

Bruce, Deborah, Deborah A. Boehm-Davis, and Karen Mahach. "In-Vehicle Auditory Display of Symbolic Information." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 20 (July 2000): 3–230. http://dx.doi.org/10.1177/154193120004402001.

31

Krishnan, Sridhar, Rangaraj M. Rangayyan, G. Douglas Bell, and Cyril B. Frank. "Auditory display of knee-joint vibration signals." Journal of the Acoustical Society of America 110, no. 6 (December 2001): 3292–304. http://dx.doi.org/10.1121/1.1413995.

32

Gonzalez, Jose, Hirokazu Soma, Masashi Sekine, and Wenwei Yu. "Auditory Display as a Prosthetic Hand Biofeedback." Journal of Medical Imaging and Health Informatics 1, no. 4 (December 1, 2011): 325–33. http://dx.doi.org/10.1166/jmihi.2011.1051.

33

Black, David, Sven Lilge, Carolin Fellmann, Anke V. Reinschluessel, Lars Kreuer, Arya Nabavi, Horst K. Hahn, Ron Kikinis, and Jessica Burgner-Kahrs. "Auditory Display for Telerobotic Transnasal Surgery Using a Continuum Robot." Journal of Medical Robotics Research 04, no. 02 (June 2019): 1950004. http://dx.doi.org/10.1142/s2424905x19500041.

Abstract:
Tubular continuum robots can follow complex curvilinear paths to reach restricted areas within the body. Using teleoperation, these robots can help minimize incisions and reduce trauma. However, drawbacks include the lack of haptic feedback and a limited view of the situs, often due to camera occlusion. This work presents a novel auditory display to enhance interaction with such continuum robots, aiming to increase accuracy and path-following efficiency and to reduce cognitive workload. We recreate a typical use case with a test environment that simulates a transnasal intervention through the sphenoidal sinus, including a simulated continuum robot. Distance information is mapped to changes in a real-time audio synthesizer using sung voice to provide navigation cues. User studies with novice participants and clinicians were performed to evaluate the effects of auditory display on accuracy, task time, path-following efficiency, subjective workload, and usability. When using auditory display, participants exhibited significant increases in accuracy, efficiency, and task time compared to visual-only display. Auditory display reduced subjective workload and raised usefulness and satisfaction ratings. The addition of auditory display for augmenting interaction with a teleoperated continuum robot has been shown to benefit performance as well as usability. The method could benefit other scenarios in navigated surgery to increase accuracy and reduce workload.
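
The abstract states that distance information drove a real-time synthesizer; the sketch below shows the general shape of such a distance-to-parameter mapping. The controlled parameters and distance ranges are assumptions, not the authors' design.

```python
def distance_to_synth(dist_mm, d_warn=10.0, d_max=50.0):
    """Map distance to a target structure onto two hypothetical
    synthesizer controls: overall level and a pitch offset. Inside
    d_warn the cue saturates; beyond d_max it is silent. The
    specific ranges and controls are illustrative, not the paper's."""
    if dist_mm >= d_max:
        return {"level": 0.0, "pitch_offset_semitones": 0.0}
    proximity = 1.0 - max(dist_mm - d_warn, 0.0) / (d_max - d_warn)
    return {"level": proximity,
            "pitch_offset_semitones": 12.0 * proximity}

for d in (60, 40, 20, 5):  # cue intensifies as the tool closes in
    print(d, distance_to_synth(d))
```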
34

Jeon, Myounghoon. "Exploring Design Constructs In Sound Design With A Focus On Perceived Affordance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 1199–203. http://dx.doi.org/10.1177/1071181319631340.

Abstract:
While design theories in visual displays have been well developed and further refined, relatively little research has been conducted on design theories and models in auditory displays. The existing discussions mainly account for functional mappings between sounds and referents, but these do not fully address design aspects of auditory displays. To bridge the gap, the present proposal focuses on design affordances in sound design among many design constructs. To this end, the definition and components of design affordances are briefly explored, followed by the auditory display examples of those components to gauge whether sound can deliver perceived affordances in interactive products. Finally, other design constructs, such as feedback and signifier, are discussed together with future work. This exploratory proposal is expected to contribute to elaborating sound design theory and practice.
35

Gilkey, Robert H., Brian D. Simpson, Douglas S. Brungart, Jeffery L. Cowgill, and Adrienne Janae Ephrem. "3D Audio Display for Pararescue Jumpers." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 19 (October 2007): 1349–52. http://dx.doi.org/10.1177/154193120705101916.

Abstract:
Visual and audio navigation aids were compared in a virtual environment that depicted an urban combat search and rescue mission (CSAR). The participants' task was to rapidly move through a virtual maze depicted in a CAVE to find a downed pilot, while dealing with automated hostile and friendly characters. The visual and audio displays were designed to present comparable information, which in separate conditions could be a simple realtime indication of the bearing to the pilot or to intermediate waypoints along the way. Auditory displays led to faster response times than visual displays (p = .011) and the waypoint display led to faster response times than the simple bearing display (p = .002). The results are considered in the context of the target CSAR application.
36

Cunio, Rachel J., David Dommett, and Joseph Houpt. "Spatial Auditory Cueing for a Dynamic Three-Dimensional Virtual Reality Visual Search Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 1766–70. http://dx.doi.org/10.1177/1071181319631045.

Abstract:
Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can cause visual system overload and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment with no cues, with non-masked spatialized auditory cues, and with masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked auditory condition slower. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.
37

Ballas, James A. "The Niche Hypothesis: Implications for Auditory Display Design." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 22 (July 2000): 718–21. http://dx.doi.org/10.1177/154193120004402258.

Abstract:
Bernie Krause has hypothesized that “each creature appears to have its own sonic niche (channel, or space) in the frequency spectrum and/or time slot occupied by no other at that particular moment” (Krause, 1987). The implication of this hypothesis is that good sound design should produce sounds that have unique spectral properties for a particular context. The semantics of the context also needs to be considered. However, this principle is difficult to satisfy because inexpensive sound generating devices have very limited (and primitive) audio capability.
38

SEKI, Yoshikazu, Yukio IWAYA, Takeru CHIBA, Satoshi YAIRI, Makoto OTANI, Makoto OH-UCHI, Tetsuya MUNEKATA, Kazutaka MITOBE, and Akio HONDA. "0733 Auditory Display and O&M Training." Proceedings of the Bioengineering Conference Annual Meeting of BED/JSME 2009.22 (2010): 305. http://dx.doi.org/10.1299/jsmebio.2009.22.305.

39

Kozlov, Andrei S., and Timothy Q. Gentner. "Central auditory neurons display flexible feature recombination functions." Journal of Neurophysiology 111, no. 6 (March 15, 2014): 1183–89. http://dx.doi.org/10.1152/jn.00637.2013.

Abstract:
Recognition of natural stimuli requires a combination of selectivity and invariance. Classical neurobiological models achieve selectivity and invariance, respectively, by assigning to each cortical neuron either a computation equivalent to the logical “AND” or a computation equivalent to the logical “OR.” One powerful OR-like operation is the MAX function, which computes the maximum over input activities. The MAX function is frequently employed in computer vision to achieve invariance and considered a key operation in visual cortex. Here we explore the computations for selectivity and invariance in the auditory system of a songbird, using natural stimuli. We ask two related questions: does the MAX operation exist in the auditory system? Is it implemented by specialized “MAX” neurons, as assumed in vision? By analyzing responses of individual neurons to combinations of stimuli we systematically sample the space of implemented feature recombination functions. Although we frequently observe the MAX function, we show that the same neurons that implement it also readily implement other operations, including the AND-like response. We then show that sensory adaptation, a ubiquitous property of neural circuits, causes transitions between these operations in individual neurons, violating the fixed neuron-to-computation mapping posited in the state-of-the-art object-recognition models. These transitions, however, accord with predictions of neural-circuit models incorporating divisive normalization and variable polynomial nonlinearities at the spike threshold. Because these biophysical properties are not tied to a particular sensory modality but are generic, the flexible neuron-to-computation mapping demonstrated in this study in the auditory system is likely a general property.
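
The operations the abstract contrasts can be summarized with a power ("generalized") mean: large positive exponents approach the OR-like MAX over inputs, while large negative exponents approach the AND-like minimum. This is a textbook formulation used here for illustration, not the paper's fitted model.

```python
import numpy as np

def generalized_mean(x, q):
    """Power mean of positive inputs: (mean(x**q))**(1/q).
    Large positive q approaches max(x) (OR/MAX-like); large negative
    q approaches min(x) (AND-like, requiring all inputs active)."""
    x = np.asarray(x, dtype=float)
    return (np.mean(x ** q)) ** (1.0 / q)

inputs = np.array([0.2, 0.9])
for q in (-20, 1, 20):
    print(q, round(generalized_mean(inputs, q), 3))
# q=-20 -> ~0.21 (min-like), q=1 -> 0.55 (mean), q=20 -> ~0.87 (max-like)
```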
40

Richardson, Suzanne, and Charles Loeffler. "Three‐dimensional auditory display of passive sonar data." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1366. http://dx.doi.org/10.1121/1.426470.

41

Chan, A. H. S., K. W. L. Chan, and R. F. Yu. "Auditory stimulus-response compatibility and control-display design." Theoretical Issues in Ergonomics Science 8, no. 6 (November 2007): 557–81. http://dx.doi.org/10.1080/14639220500330455.

42

Cabrera, Densil, Sam Ferguson, and Robert Maria. "Sonification of sound: Auditory display for acoustics education." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3073. http://dx.doi.org/10.1121/1.4787387.

43

Wenzel, Elizabeth M., Frederic L. Wightman, and Scott H. Foster. "Development of a three-dimensional auditory display system." ACM SIGCHI Bulletin 20, no. 2 (October 1988): 52–57. http://dx.doi.org/10.1145/54386.54405.

44

Jeong, Wooseob, and Myke Gluck. "Multimodal bivariate thematic maps: Auditory and haptic display." Proceedings of the American Society for Information Science and Technology 39, no. 1 (January 31, 2005): 279–83. http://dx.doi.org/10.1002/meet.1450390130.

45

Ziemer, Tim, and Holger Schultheis. "Psychoacoustic auditory display for navigation: an auditory assistance system for spatial orientation tasks." Journal on Multimodal User Interfaces 13, no. 3 (November 24, 2018): 205–18. http://dx.doi.org/10.1007/s12193-018-0282-2.

46

Takahashi, Hidetomo, and Satoshi Kanai. "Experimental Assessment for Examination of Curves and Surfaces by Auditory Sense." Journal of Robotics and Mechatronics 9, no. 6 (December 20, 1997): 434–38. http://dx.doi.org/10.20965/jrm.1997.p0434.

Abstract:
The purpose of this research is to assess the ability of the auditory sense to examine curves and free-form surface models in order to verify their appearances. In this paper, the authors will first show how to display a curve and a surface model using an acoustic wave, i.e., the information about curves and surfaces that is needed, the kinds of acoustic waves that should be displayed, and how to relate the geometric information from curves or surfaces to the acoustic waves that are to be displayed. Next, the authors will discuss their newly developed experimental system. Finally, the ability of the auditory sense to examine curves and surfaces will be assessed experimentally. It will be demonstrated that the auditory sense can examine curves and surfaces.
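
One plausible instance of relating geometric information to acoustic waves is to map the curvature of a parametric curve onto pitch as a cursor sweeps along it; the curvature formula is standard, while the frequency mapping is an assumption of this sketch.

```python
import numpy as np

def curvature(x, y, t):
    """Curvature of a planar parametric curve sampled at points t:
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)**1.5."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0, 2 * np.pi, 1000)
x, y = np.cos(t), 0.5 * np.sin(t)           # an ellipse
freq = 200.0 + 2000.0 * curvature(x, y, t)  # sharper bends -> higher pitch
```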
47

Castro-Astor, Ivandy N., Maria Alice S. Alves, and Roberto B. Cavalcanti. "Display Behavior and Spatial Distribution of the White-Crowned Manakin in the Atlantic Forest of Brazil." Condor 109, no. 1 (February 1, 2007): 155–66. http://dx.doi.org/10.1093/condor/109.1.155.

Abstract:
We studied the display behavior and spatial distribution of the White-crowned Manakin (Dixiphia pipra, formerly in the genus Pipra) in the Atlantic Forest of the state of Rio de Janeiro, Brazil. The study area included three leks, two apparently solitary display sites, and two “collective display sites,” where definitive-plumaged males, predefinitive-plumaged males, and birds of undetermined sex displayed. The average distance between display sites was 68.0 ± 24.4 m (n = 8, range = 41–113 m). Males occupied the same display sites among years. The dispersion pattern of males was typical of exploded or dispersed leks. Males did not display in auditory or visual contact, except at the two display sites that were closest to each other. Lekking White-crowned Manakins used 11 display behaviors and two vocalizations. Four of the 11 display behaviors were recorded only at the collective display sites. We only observed males displaying in the presence of other individuals, regardless of whether it was a collective or solitary display site. Definitive- and predefinitive-plumaged males and birds of indeterminate sex all displayed together. The White-crowned Manakin repertoire of 11 display behaviors indicates a more complex display behavior than previously described.
48

Kostek, Bożena. "Auditory Display Applied to Research in Music and Acoustics." Archives of Acoustics 39, no. 2 (March 1, 2015): 203–14. http://dx.doi.org/10.2478/aoa-2014-0025.

Abstract:
This paper presents a relationship between Auditory Display (AD) and the domains of music and acoustics. First, some basic notions of the Auditory Display area are briefly outlined. Then, the research trends and system solutions within the fields of music technology, music information retrieval and music recommendation and acoustics that are within the scope of AD are discussed. Finally, an example of an AD solution based on gaze tracking that may facilitate the music annotation process is shown. The paper concludes with a few remarks about directions for further research in the domains discussed.
49

McMullen, Kyla, and Yunhao Wan. "A machine learning tutorial for spatial auditory display using head-related transfer functions." Journal of the Acoustical Society of America 151, no. 2 (February 2022): 1277–93. http://dx.doi.org/10.1121/10.0007486.

Abstract:
This review presents a high-level overview of the uses of machine learning (ML) to address several challenges in spatial auditory display research, primarily using head-related transfer functions. This survey also reviews and compares several categories of ML techniques and their application to virtual auditory reality research. This work addresses the use of ML techniques such as dimensionality reduction, unsupervised learning, supervised learning, reinforcement learning, and deep learning algorithms. The paper concludes with a discussion of the usage of ML algorithms to address specific spatial auditory display research challenges.
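
As one example of the dimensionality-reduction techniques such a tutorial covers, HRTF magnitude spectra are commonly compressed with PCA. The sketch below runs PCA on random data standing in for a measured HRTF set, so the array shapes and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for measured HRTFs: 500 directions x 128 frequency bins.
rng = np.random.default_rng(1)
hrtf_mags_db = rng.normal(size=(500, 128))

pca = PCA(n_components=10)                  # keep 10 spectral basis vectors
weights = pca.fit_transform(hrtf_mags_db)   # (500, 10) per-direction weights
reconstructed = pca.inverse_transform(weights)
print(pca.explained_variance_ratio_.sum())  # variance retained
```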
50

Brungart, Douglas S. "Near-Field Virtual Audio Displays." Presence: Teleoperators and Virtual Environments 11, no. 1 (February 2002): 93–106. http://dx.doi.org/10.1162/105474602317343686.

Abstract:
Although virtual audio displays are capable of realistically simulating relatively distant sound sources, they are not yet able to accurately reproduce the spatial auditory cues that occur when sound sources are located near the listener's head. Researchers have long recognized that the binaural difference cues that dominate auditory localization are independent of distance beyond 1 m but change systematically with distance when the source approaches within 1 m of the listener's head. Recent research has shown that listeners are able to use these binaural cues to determine the distances of nearby sound sources. However, technical challenges in the collection and processing of near-field head-related transfer functions (HRTFs) have thus far prevented the construction of a fully functional near-field audio display. This paper summarizes the current state of research in the localization of nearby sound sources and outlines the technical challenges involved in the creation of a near-field virtual audio display. The potential applications of near-field displays in immersive virtual environments and multimodal interfaces are also discussed.
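
A toy illustration of why binaural cues become distance-dependent near the head: if each ear simply receives level proportional to 1/r from its own source-to-ear distance, the interaural level difference grows sharply as the source approaches within a head radius or two but flattens beyond a meter. The point-source, free-field assumptions here are deliberately crude.

```python
import numpy as np

def toy_ild_db(src_xy, ear_sep=0.18):
    """Crude interaural level difference from 1/r attenuation to two
    point ears separated by ear_sep metres (head shadow ignored)."""
    left = np.array([-ear_sep / 2, 0.0])
    right = np.array([ear_sep / 2, 0.0])
    r_l = np.linalg.norm(src_xy - left)
    r_r = np.linalg.norm(src_xy - right)
    return 20 * np.log10(r_l / r_r)  # positive = louder at right ear

# Source 45 degrees to the right at decreasing distances:
for d in (2.0, 1.0, 0.5, 0.25):
    src = d * np.array([np.sin(np.pi / 4), np.cos(np.pi / 4)])
    print(f"{d:4.2f} m -> ILD {toy_ild_db(src):5.2f} dB")
```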