Academic literature on the topic 'Non-visual modalities'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Non-visual modalities.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Non-visual modalities"

1

Scholtz, Desiree. "Visual and non-literal representations as academic literacy modalities." Southern African Linguistics and Applied Language Studies 37, no. 2 (September 6, 2019): 105–18. http://dx.doi.org/10.2989/16073614.2019.1617173.

2

Spiller, Mary Jane. "Exploring synaesthetes’ mental imagery abilities across multiple sensory modalities." Seeing and Perceiving 25 (2012): 219. http://dx.doi.org/10.1163/187847612x648459.

Abstract:
Previous research on the mental imagery abilities of synaesthetes has concentrated on visual and spatial imagery in synaesthetes with spatial forms (Price, 2009, 2010; Simner et al., 2008) and letter-colour synaesthesia (Spiller and Jansari, 2008). Though Barnett and Newell (2008) asked synaesthetes of all types to fill out a questionnaire on visual imagery, most of their synaesthetes reported some form of linguistic–colour synaesthesia. We extend the investigation of mental imagery to a wider variety of synaesthesia types and a wider variety of sensory modalities using a questionnaire study and several tests of visual and auditory mental imagery ability. Our results indicate that, as a group, synaesthetes report making greater use of mental imagery than non-synaesthetes, in everyday activities. Furthermore, they self-report greater vividness of visual, auditory, tactile, and taste imagery than do non-synaesthetes. However, as a group the synaesthetes are not seen to do significantly better at the mental imagery tasks, in either the visual or auditory modalities. These results have important implications for our understanding of synaesthesia, in relation to potential fundamental differences in perceptual processing of synaesthetes and non-synaesthetes.
3

Dowlatshahi, K., and J. Dieschbourg. "Shift in the surgical treatment of non-palpable breast cancer: tactile to visual." Breast Cancer Online 9, no. 1 (January 2006): 1–10. http://dx.doi.org/10.1017/s1470903105003755.

Abstract:
An increasing number of small, early-stage breast cancers are detected by screening mammography. Diagnosis and determination of the prognostic factors may be made by either ultrasound (US)- or stereotactically-guided needle biopsy. Approximately 2000 stereotactic tables are installed at various medical centers throughout the United States, and a significant number in other countries where breast cancer is common. Many surgeons and interventional radiologists are trained in the use of this technology for diagnostic purposes. Employing the same technology, these physicians may be trained to treat selected breast cancers with laser energy percutaneously. Experimental and clinical reports to date indicate the technique to be safe. High-resolution imaging modalities, including grayscale and color Doppler US, magnetic resonance imaging, mammography, and needle biopsy when necessary, will confirm the tumor kill. Newer imaging modalities such as magnetic resonance spectroscopy may also provide additional confirmation of total tumor ablation.
4

Brodoehl, Stefan, Carsten Klingner, Denise Schaller, and Otto W. Witte. "Plasticity During Short-Term Visual Deprivation." Zeitschrift für Psychologie 224, no. 2 (April 2016): 125–32. http://dx.doi.org/10.1027/2151-2604/a000246.

Abstract:
During everyday experiences, people sometimes close their eyes to better understand spoken words, to listen to music, or when touching textures and objects. A plausible explanation for this observation is that a reversible loss of vision changes the perceptual function of the remaining non-deprived sensory modalities. Within this work, we discuss general aspects of the effects of visual deprivation on the perceptual performance of the non-deprived sensory modalities with a focus on the time dependency of these modifications. In light of ambiguous findings concerning the effects of short-term visual deprivation and because recent literature provides evidence that the act of blindfolding can change the function of the non-deprived senses within seconds, we performed additional psychophysiological and functional magnetic resonance imaging (fMRI) analysis to provide new insight into this matter. Eye closure for several seconds led to a substantial impact on tactile perception probably caused by an unmasking of preformed neuronal pathways.
5

Tobin, Michael, Nicholas Bozic, Graeme Douglas, and John Greaney. "How non-visual modalities can help the young visually impaired child to succeed in visual and other tasks." British Journal of Visual Impairment 14, no. 1 (January 1996): 11–17. http://dx.doi.org/10.1177/026461969601400103.

6

Chenu, Olivier, Yohan Payan, P. Hlavackova, Jacques Demongeot, Francis Cannard, Bruno Diot, and Nicolas Vuillerme. "Pressure Sores Prevention for Paraplegic People: Effects of Visual, Auditory and Tactile Supplementations on Overpressures Distribution in Seated Posture." Applied Bionics and Biomechanics 9, no. 1 (2012): 61–67. http://dx.doi.org/10.1155/2012/961524.

Abstract:
This paper presents a study on the use of different informative modalities as biofeedback in a perceptual supplementation device aimed at reducing overpressure in the buttock area. Visual, audio, and lingual electrotactile modalities are analysed and compared with a non-biofeedback session. In conclusion, the sensory modalities have a positive and equal effect, but they are not judged equally in terms of comfort and interference with other activities.
7

Pereira Reyes, Yasna, and Valerie Hazan. "English vowel perception by non-native speakers: impact of audio and visual training modalities." Onomázein: Revista de lingüística, filología y traducción, no. 51 (2021): 111–36. http://dx.doi.org/10.7764/onomazein.51.04.

Abstract:
Perception of the sounds of a second language (L2) presents difficulties for non-native speakers that can be reduced with training (Bradlow, Pisoni, Akahane-Yamada & Tohkura, 1997; Logan, Lively & Pisoni, 1991; Iverson & Evans, 2009). The aim of this study was to compare three different English vowel perceptual training programmes using audio (A), audiovisual (AV) and video (V) modes in non-native speakers with Spanish as their native language (L1). 47 learners of English with Spanish as L1 were allocated to three different vowel training groups (AT, AVT, VT) and were given five training sessions to assess their improvement in English vowel perception. Additionally, participants were recorded before and after training to measure their improvement in the production of English vowels. Results showed that participants improved their perception and production of English vowels regardless of their training modality, with no evidence of a benefit from visual information. These results also suggest that there are large individual differences in the perception and production of L2 vowels, which may be related to a complex relation between speech perception and production mechanisms.
8

Zhang, Tao, Yang Cong, Gan Sun, Qianqian Wang, and Zhenming Ding. "Visual Tactile Fusion Object Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10426–33. http://dx.doi.org/10.1609/aaai.v34i06.6612.

Abstract:
Object clustering, aiming at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied among various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) only explore visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively benefit from both visual and tactile modalities for object clustering, in this paper, we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn a hierarchical expression of the visual-tactile fusion data, and preserve the local structure of the data-generating distribution of the visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace in which the gap between visual and tactile data is mitigated. For the model optimization, we present an efficient alternating minimization strategy to solve our proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.
9

Bautista, Melissa, Nayyar Saleem, and Ian A. Anderson. "Current and novel non-invasive imaging modalities in vascular neurosurgical practice." British Journal of Hospital Medicine 81, no. 12 (December 2, 2020): 1–10. http://dx.doi.org/10.12968/hmed.2020.0550.

Abstract:
Radiological investigations are a powerful tool in the assessment of patients with intracranial vascular anomalies. ‘Visual’ assessment of neurovascular lesions is central to their diagnosis, monitoring, prognostication and management. Computed tomography and magnetic resonance imaging are the two principal non-invasive imaging modalities used in clinical practice for the assessment of the cerebral vasculature, but these techniques continue to evolve, enabling clinicians to gain greater insights into neurovascular pathology and pathophysiology. This review outlines both established and novel imaging modalities used in modern neurovascular practice and their clinical applications.
10

Gonsior, Barbara, Christian Landsiedel, Nicole Mirnig, Stefan Sosnowski, Ewald Strasser, Jakub Złotowski, Martin Buss, et al. "Impacts of Multimodal Feedback on Efficiency of Proactive Information Retrieval from Task-Related HRI." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 313–26. http://dx.doi.org/10.20965/jaciii.2012.p0313.

Abstract:
This work is a first step towards an integration of multimodality with the aim of making efficient use of both human-like and non-human-like feedback modalities in order to optimize proactive information retrieval from task-related Human-Robot Interaction (HRI) in human environments. The presented approach combines the human-like modalities of speech and emotional facial mimicry with non-human-like modalities. The proposed non-human-like modalities are a screen displaying the robot's retrieved knowledge to the human, and a pointer mounted above the robot head for pointing directions and referring to objects in shared visual space as an equivalent for arm and hand gestures. Initially, pre-interaction feedback is explored in an experiment investigating different approach behaviors in order to find socially acceptable trajectories to increase the success of interactions and thus the efficiency of information retrieval. Secondly, pre-evaluated human-like modalities are introduced. First results of a multimodal feedback study are presented in the context of the IURO (Interactive Urban Robot, http://www.iuro-project.eu) project, where a robot asks for its way to a predefined goal location.

Dissertations / Theses on the topic "Non-visual modalities"

1

Wierzba, Weronika. "Beyond the screen. : Exploring the usability of non-visual modalities in cross-device systems." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43660.

Abstract:
This thesis explores how non-visual modalities, especially gestures, can be utilized to enhance user experience, taking an existing multi-screen, cross-device system as a probe. In the first chapters of the thesis, the theory of cross-device systems is reviewed for existing design frameworks, principles, and practices, and a case study is conducted to further investigate the challenges and issues occurring in a chosen cross-device system. As a conclusion of both the theoretical and empirical research, a pivotal change in the design approach is made: the design opportunity focuses on exploring non-visual modalities in the context of the above-mentioned cross-device system. As a result of the design activities, especially the co-creation session, a gesture taxonomy is proposed. The gestures are described and documented in order to contribute to the field of HCI and to become an inspiration for designers aiming to design for cross-device systems beyond the screen.

Books on the topic "Non-visual modalities"

1

Sicari, Rosa, Edyta Płońska-Gościniak, and Jorge Lowenstein. Stress echocardiography: image acquisition and modalities. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198726012.003.0013.

Abstract:
Stress echocardiography has evolved over the last 30 years but image interpretation remains subjective and burdened by the operator’s experience. The objective operator-independent assessment of myocardial ischaemia during stress echocardiography remains a technological challenge. Still, adequate quality of two-dimensional images remains a prerequisite to successful quantitative analysis, even using Doppler and non-Doppler based techniques. No new technology has proved to have a higher diagnostic accuracy than conventional visual wall motion analysis. Tissue Doppler imaging and derivatives may reduce inter-observer variability, but still require a dedicated learning curve and special expertise. The development of contrast media in echocardiography has been slow. In the past decade, transpulmonary contrast agents have become commercially available for clinical use. The approved indication for the use of contrast echocardiography currently lies in improving endocardial border delineation in patients in whom adequate imaging is difficult or suboptimal. Real-time three-dimensional echocardiography is potentially useful but limited by low spatial and temporal resolution. It is possible that these technologies may serve as an adjunct to expert visual assessment of wall motion. At present, these quantitative methods require further validation and simplification of analysis techniques.

Book chapters on the topic "Non-visual modalities"

1

Eulitz, C., B. Maess, C. Pantev, A. Friederici, B. Feige, and T. Elbert. "Oscillatory Neuromagnetic Activity Induced by Verbal and Non-Verbal Stimuli Presented in Visual and Auditory Modalities." In Biomag 96, 931–34. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1260-7_227.

2

Koutny, Reinhard, Sebastian Günther, Naina Dhingra, Andreas Kunz, Klaus Miesenberger, and Max Mühlhäuser. "Accessible Multimodal Tool Support for Brainstorming Meetings." In Lecture Notes in Computer Science, 11–20. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58805-2_2.

Abstract:
In recent years, assistive technology and digital accessibility for blind and visually impaired people (BVIP) have been significantly improved. Yet, group discussions, especially in a business context, are still challenging, as non-verbal communication (NVC) is often depicted on digital whiteboards, including deictic gestures paired with visual artifacts. However, as NVC heavily relies on visual perception, which represents a large amount of detail, an adaptive approach is required that identifies the most relevant information for BVIP. Additionally, visual artifacts usually rely on spatial properties such as position, orientation, and dimensions to convey essential information such as hierarchy, cohesion, and importance that is often not accessible to the BVIP. In this paper, we investigate the requirements of BVIP during brainstorming sessions and, based on our findings, provide an accessible multimodal tool that uses non-verbal and spatial cues as an additional layer of information. Further, we contribute by presenting a set of input and output modalities that encode and decode information with respect to the individual demands of BVIP and the requirements of different use cases.
3

Abu Doush, Iyad, and Enrico Pontelli. "Building a Programmable Architecture for Non-visual Navigation of Mathematics: Using Rules for Guiding Presentation and Switching between Modalities." In Lecture Notes in Computer Science, 3–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02713-0_1.

4

Kühle, Lana. "The Emotional Dimension to Sensory Perception." In The Epistemology of Non-Visual Perception, 236–55. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190648916.003.0011.

Abstract:
This chapter considers how we might understand the effect that emotions have on the justification of our perceptual beliefs about the world, beliefs that we acquire from a variety of sensory modalities—audition, gustation, olfaction, and so on. The chapter takes the problem to be associated with one of two forms of perceptual influence: penetration or multisensory integration. In any given perceptual moment there are multiple sensory modalities and mental states at play, each affecting the overall experience. Whether we understand the influence of emotion on perception as a form of non-perceptual penetration or a form of non-visual sensory perception of the inner body—interoception—the potential epistemological difficulties remain: How can we be said to acquire justified beliefs and knowledge on the basis of such influenced perceptual experience? As has been the norm, only the five exteroceptive senses of vision, audition, olfaction, taste, and touch are typically discussed in the context of sensory perception. However, as this chapter argues, there is strong reason to accept the claim that emotional experience is a form of interoception, and that interoception ought to be considered when discussing sensory perception. In this way, then, the chapter proposes that clarifying the role played by interoception in sense perception across modalities is necessary if we are to make progress on the epistemological problems at hand.
5

Matey, Jennifer. "The Perception of Virtue." In The Epistemology of Non-Visual Perception, 256–72. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190648916.003.0012.

Abstract:
It is not uncommon for people to like what they take as morally good. And often these feelings of esteem for virtue come prior to conscious cognitive appraisals about character. This chapter outlines a framework for understanding some emotional responses of esteem to perceived good character as representing the character traits as valuable, and hence, as virtues. It is proposed that these esteeming experiences are analogous to perceptual representations in other modalities in their epistemic role as causing, providing content for, and in justifying beliefs regarding the value of the traits they represent. The role of the perceiver’s own character in their ability to recognize and respond appropriately to virtue in others is also discussed. It is shown that moral virtues can also be epistemic virtues when it comes to facilitating knowledge about the character of people we encounter.
6

Mourya, Gajendra Kumar, Dinesh Bhatia, and Akash Handique. "Segmentation of Liver From 3D Medical Imaging Dataset for Diagnosis and Treatment Planning of Liver Disorders." In Advances in Medical Technologies and Clinical Practice, 191–217. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-4969-7.ch009.

Abstract:
CT- and MRI-based imaging modalities are non-invasive, fast, and accurate in the diagnosis of different anatomical and pathological disorders. As such, there is a pertinent requirement for segmentation of the large organs such as liver and lungs to give proper visual information on the extent of involvement of morphological and pathological changes. The aim of this chapter is to discuss and implement different liver segmentation techniques on the 3D medical data set to determine best feasible technique for the purpose. The localization and detection of liver tumor will be easier for a radiologist with the extraction of the liver margins from other adjoining organs and its localization within the anatomical segments. It is found that active contour technique provides satisfactory results and also the validation results are well outlined in the case of active contour techniques.
7

Gupta, Yogesh Kumar. "Evolution of Big Data in Medical Imaging Modalities to Extract Features Using Region Growing Segmentation, GLCM, and Discrete Wavelet Transform." In Advancements in Security and Privacy Initiatives for Multimedia Images, 41–78. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-2795-5.ch003.

Abstract:
Big data refers to the massive amount of data from sundry sources (social media, healthcare, different sensors, etc.) arriving with very high velocity. Owing to this rapid growth, the volume of multimedia and image data has increased quickly with the expansion of social networking, surveillance cameras, satellite imaging, and medical imaging. Healthcare is the most promising area where big data can be applied to make a difference in human life. The process of analyzing such complex data is mainly concerned with the discovery of hidden patterns. In healthcare, capturing the visual context of medical images through feature extraction is a well-established task in digital image processing. The motive of this research is to present a detailed overview of big data in healthcare and the processing of non-invasive medical images with the help of feature extraction techniques such as region growing segmentation, GLCM, and the discrete wavelet transform.
8

"Getting Visually Acquainted With a Learning Discipline, Related Professions, and an Overarching Domain." In Visual Approaches to Instructional Design, Development, and Deployment, 1–14. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3946-0.ch001.

Abstract:
Instructional designers create learning materials about subjects in which they are non-experts and outsiders. Using formal and informal visuals to learn about a field will shed light on aspects of a discipline and its related professions and overarching domain. The imagery may offer a path to learning that may be more accessible than other modalities, at least initially, and these images may be a gateway to further (textual) research and learning. Here, using visuals is shown to complement other modes of learning about a field.
9

Muhafiz, Ersin. "Advances in Non-surgical Treatment Methods in Vision Rehabilitation of Keratoconus Patients." In Eyesight and Medical Image Cognition - Recent Advances and New Perspectives [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.94250.

Abstract:
Visual acuity decreases due to progressive irregular astigmatism in keratoconus (KC). Although glasses can be useful in the initial stages of vision rehabilitation, contact lenses (CL) are needed in many patients due to irregular astigmatism. Although rigid gas permeable (RGP) CLs provide the patient with better visual acuity than glasses, they affect the corneal tissues and cause comfort problems. Although soft CLs produced for KC have solved some of these problems, they cannot increase visual acuity as much as RGPs in advanced-stage KC. For this reason, the search for better vision rehabilitation and comfort in KC has continued. In this context, piggyback contact lenses (PBCL) have been used in vision rehabilitation. Hybrid CLs have gained popularity because PBCLs cause corneal neovascularization and giant papillary conjunctivitis. Scleral CLs have been developed for some patients with advanced KC in whom other options provide limited benefit; they offer good vision rehabilitation, but their biggest problem is the difficulty of application and removal. All these CL modalities try to improve quality of life and delay surgical procedures by increasing the level of vision in patients with KC.

Conference papers on the topic "Non-visual modalities"

1

Mandeljc, Rok, Janez Pers, Matej Kristan, and Stanislav Kovacic. "Fusion of non-visual modalities into the Probabilistic Occupancy Map framework for person localization." In 2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC). IEEE, 2011. http://dx.doi.org/10.1109/icdsc.2011.6042937.

2

Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.

Abstract:
Video data is multimodal in its nature, where an utterance can involve linguistic, visual and acoustic information. Therefore, a key challenge for video sentiment analysis is how to combine different modalities for sentiment recognition effectively. The latest neural network approaches achieve state-of-the-art performance, but they neglect to a large degree of how humans understand and reason about sentiment states. By contrast, recent advances in quantum probabilistic neural models have achieved comparable performance to the state-of-the-art, yet with better transparency and increased level of interpretability. However, the existing quantum-inspired models treat quantum states as either a classical mixture or as a separable tensor product across modalities, without triggering their interactions in a way that they are correlated or non-separable (i.e., entangled). This means that the current models have not fully exploited the expressive power of quantum probabilities. To fill this gap, we propose a transparent quantum probabilistic neural model. The model induces different modalities to interact in such a way that they may not be separable, encoding crossmodal information in the form of non-classical correlations. Comprehensive evaluation on two benchmarking datasets for video sentiment analysis shows that the model achieves significant performance improvement. We also show that the degree of non-separability between modalities optimizes the post-hoc interpretability.
3

Morrison, Robert, Thomas Lord, Emily Esko, Lauren Gillmeister, Christine Kazlauskas, Derek Kamper, and Jennifer Kang-Mieler. "Design of a Novel Electronic Travel Aid to Assist Visually Impaired Individuals Navigate Their Environment." In ASME 2012 Summer Bioengineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/sbc2012-80352.

Abstract:
Worldwide there are over 160 million people with severe visual impairment, as defined by visual acuity poorer than 20/200. A prominent concern for visually impaired individuals is their limited navigational abilities due to insufficient sensory information about their surrounding environment, which results in difficulty with navigating new or complex environments. In these situations, they often have to rely on the assistance of others to help them reach their destination. Furthermore, even when the visually impaired individuals are familiar with the area, they are not always aware of non-stationary obstacles, such as cars or people. Two commonly used solutions currently available to help visually impaired individuals navigate their surroundings are the white cane and guide dogs. The white cane is useful for alerting its users to obstacles closer than 1.5 m, but it does not provide any information about the environment beyond that scope. Guide dogs are unfortunately in limited supply and can cost upwards of $42,000 to train. To address this challenge, multiple groups have examined more technologically advanced solutions to help visually impaired individuals. However, these devices have some major limitations, such as complicated display modalities and non-intuitive sensory representation of environmental information. The major goal of this proposal is to develop a new electronic travel aid (ETA) that will help visually impaired individuals navigate their environment more easily by using a novel method of directly displaying the location of obstacles up to 4 m away on the user’s torso with a grid of small vibrational devices called tactors. This device is intended to be used with a traditional white cane that can detect objects very close to the user and terrain changes, such as a step in a stairwell.
4

Marchelli, Grant L. S., David R. Haynor, William R. Ledoux, Mark A. Ganter, and Duane W. Storti. "Graphical User Interface for Human Intervention in 2D-3D Registration of Medical Images." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13659.

Abstract:
Image-guided medical therapies and image-guided biomechanical measurement systems often combine 2D and 3D imaging modalities. Determination of relations between the 2D and 3D imaging data is known as 2D-3D registration. Motivated by an ongoing project aimed at non-invasive, marker-free measurement of the kinematics of the bones in the foot during gait, we consider a registration approach that involves (1) computing projections of the 3D data set, (2) computing a quality measure to describe the agreement/discrepancy between the simulated projections and actual 2D images, and (3) optimization of the quality measure relative to the kinematic degrees of freedom to determine the optimal registration. For our particular project, the 3D imaging modality is CT scan, the 2D modality is bi-plane fluoroscopy, the computed projection is a digitally reconstructed radiograph (DRR), the quality measure is normalized cross-correlation (NCC) between a pair of DRRs and a pair of corresponding fluoroscope images, and the 2D imaging includes a sequence of several hundred stereo image pairs. We have recently released a software toolkit, DRRACC, that accelerates both the DRR and NCC computations via GPU-based parallel processing to enable more efficient automated determination of kinematic relations for optimal registration. While fully automated 2D-3D registration is desirable, there are situations (such as creating a reasonable starting configuration for optimization, re-starting after the optimizer fails to converge, and visual verification of registration relations) when it is desirable/necessary to have a human in the loop. In this paper, we present an OpenGL-based graphical user interface that employs the DRRACC toolkit to allow the user to manipulate the kinematics of individual objects (bones) segmented from the 3D imaging and to view the corresponding DRR and the associated correlation with a reference image in real time. 
We also present plots showing initial results for the dependence of the registration measure on pairs of kinematic parameters. The plots show well-defined peaks that support the hope for automated registration, but they also contain large relatively flat regions that may prove problematic for gradient-based optimizers and necessitate the sort of interface presented in this paper.