Journal articles on the topic 'Spatial body representation'

Consult the top 50 journal articles for your research on the topic 'Spatial body representation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Tolja, Jader, and Clara Cardia. "Body organisation and spatial representation." Cognitive Processing 7, S1 (August 9, 2006): 95. http://dx.doi.org/10.1007/s10339-006-0084-4.

2

Press, Clare, Marisa Taylor-Clarke, Steffan Kennett, and Patrick Haggard. "Visual enhancement of touch in spatial body representation." Experimental Brain Research 154, no. 2 (January 1, 2004): 238–45. http://dx.doi.org/10.1007/s00221-003-1651-x.

3

Samad, Majed, and Ladan Shams. "Recalibrating the body: visuotactile ventriloquism aftereffect." PeerJ 6 (March 15, 2018): e4504. http://dx.doi.org/10.7717/peerj.4504.

Abstract:
Visuotactile ventriloquism is a recently reported effect showing that somatotopic tactile representations (namely, representation of location along the surface of one's arm) can be biased by simultaneous presentation of a visual stimulus in a spatial localization task along the surface of the skin. Here we investigated whether exposure to a discrepancy between tactile and visual stimuli on the skin can induce lasting changes in the somatotopic representations of space. We conducted an experiment investigating this question by asking participants to perform a localization task that included unisensory and bisensory trials, before and after exposure to spatially discrepant visuotactile stimuli. Participants localized brief flashes of light and brief vibrations presented along the surface of their forearms, either individually (unisensory conditions) or simultaneously at the same or different locations. We then compared the localization of tactile stimuli in unisensory tactile conditions before and after exposure to the discrepant bisensory stimuli. After exposure, participants exhibited a shift in their tactile localizations in the direction of the visual stimulus that was presented during the exposure block. These results demonstrate that somatotopic spatial representations are capable of rapidly recalibrating after a very brief exposure to visually discrepant stimuli.
4

Struiksma, Marijn E., Matthijs L. Noordzij, and Albert Postma. "Embodied representation of the body contains veridical spatial information." Quarterly Journal of Experimental Psychology 64, no. 6 (June 2011): 1124–37. http://dx.doi.org/10.1080/17470218.2011.552982.

5

Haggard, Patrick, Gian Domenico Iannetti, and Matthew R. Longo. "Spatial Sensory Organization and Body Representation in Pain Perception." Current Biology 23, no. 4 (February 2013): R164–R176. http://dx.doi.org/10.1016/j.cub.2013.01.047.

6

Cocchini, Gianna, Toni Galligan, Laura Mora, and Gustav Kuhn. "The magic hand: Plasticity of mental hand representation." Quarterly Journal of Experimental Psychology 71, no. 11 (January 1, 2018): 2314–24. http://dx.doi.org/10.1177/1747021817741606.

Abstract:
Internal spatial body configurations are crucial to successfully interact with the environment and to experience our body as a three-dimensional volumetric entity. These representations are highly malleable and are modulated by a multitude of afferent and motor information. Despite some studies reporting the impact of sensory and motor modulation on body representations, the long-term relationship between sensory information and mental representation of own body parts is still unclear. We investigated hand representation in a group of expert sleight-of-hand magicians and in a group of age-matched adults naïve to magic (controls). Participants were asked to localise landmarks of their fingers when their hand position was congruent with the mental representation (Experiment 1) and when proprioceptive information was “misleading” (Experiment 2). Magicians outperformed controls in both experiments, suggesting that extensive training in sleight of hand has a profound effect in refining hand representation. Moreover, the impact of training seems to have a high body-part specificity, with a maximum impact for those body sections used more prominently during the training. Interestingly, it seems that sleight-of-hand training can lead to a specific improvement of hand mental representation, which relies less on proprioceptive information.
7

Medina, Jared, Shaan Khurshid, Roy H. Hamilton, and H. Branch Coslett. "Examining tactile spatial remapping using transcranial magnetic stimulation." Seeing and Perceiving 25 (2012): 143. http://dx.doi.org/10.1163/187847612x647757.

Abstract:
Previous research has provided evidence for two stages of tactile processing (e.g., Azañon and Soto-Faraco, 2008; Groh and Sparks, 1996). First, tactile stimuli are represented in a somatotopic representation that does not take into account body position in space, followed by a representation of body position in external space (body posture representation, see Medina and Coslett, 2010). In order to explore potential functional and neural dissociations between these two stages of processing, we presented eight participants with TMS before and after a tactile temporal order judgment (TOJ) task (see Yamamoto and Kitazawa, 2001). Participants were tested with their hands crossed and uncrossed before and after 20 min of 1 Hz repetitive TMS (rTMS). Stimulation occurred at the left anterior intraparietal sulcus (aIPS, somatotopic representation) or left Brodmann Area 5 (BA5, body posture) during two separate sessions. We predicted that left aIPS TMS would affect a somatotopic representation of the body, and would disrupt performance in both the uncrossed and crossed conditions. However, we predicted that TMS of body posture areas (BA5) would disrupt mechanisms for updating limb position with the hands crossed, resulting in a paradoxical improvement in performance after TMS. Using thresholds derived from adaptive staircase procedures, we found that left aIPS TMS disrupted performance in the uncrossed condition. However, left BA5 TMS resulted in a significant improvement in performance with the hands crossed. We discuss these results with reference to potential dissociations of the traditional body schema.
8

Long, Xiaoyang, and Sheng-Jia Zhang. "A novel somatosensory spatial navigation system outside the hippocampal formation." Cell Research 31, no. 6 (January 18, 2021): 649–63. http://dx.doi.org/10.1038/s41422-020-00448-8.

Abstract:
Spatially selective firing of place cells, grid cells, boundary vector/border cells and head direction cells constitutes the basic building blocks of a canonical spatial navigation system centered on the hippocampal-entorhinal complex. While head direction cells can be found throughout the brain, spatial tuning outside the hippocampal formation is often non-specific or conjunctive to other representations such as a reward. Although the precise mechanism of spatially selective firing activity is not understood, various studies show sensory inputs, particularly vision, heavily modulate spatial representation in the hippocampal-entorhinal circuit. To better understand the contribution of other sensory inputs in shaping spatial representation in the brain, we performed recording from the primary somatosensory cortex in foraging rats. To our surprise, we were able to detect the full complement of spatially selective firing patterns similar to that reported in the hippocampal-entorhinal network, namely, place cells, head direction cells, boundary vector/border cells, grid cells and conjunctive cells, in the somatosensory cortex. These newly identified somatosensory spatial cells form a spatial map outside the hippocampal formation and support the hypothesis that location information modulates body representation in the somatosensory cortex. Our findings provide transformative insights into our understanding of how spatial information is processed and integrated in the brain, as well as functional operations of the somatosensory cortex in the context of rehabilitation with brain-machine interfaces.
9

Korneva, Valentina. "The Language Representation of Spatial Orientation in Spanish." Cuadernos Iberoamericanos, no. 2 (June 28, 2016): 139–44. http://dx.doi.org/10.46272/2409-3416-2016-2-139-144.

Abstract:
The article specifies the concept of spatial orientation and identifies the potential of the construction nombre sustantivo + adverbio to determine the position of the human body, as well as the body of an animal or an inanimate object, in Spanish. It describes the functional capacity of the construction and the features of its representation at the lexical, morphological, and syntactic levels.
10

Mora, Laura, Anna Sedda, Teresa Esteban, and Gianna Cocchini. "The signing body: extensive sign language practice shapes the size of hands and face." Experimental Brain Research 239, no. 7 (May 24, 2021): 2233–49. http://dx.doi.org/10.1007/s00221-021-06121-9.

Abstract:
The representation of the metrics of the hands is distorted, but is susceptible to malleability due to expert dexterity (magicians) and long-term tool use (baseball players). However, it remains unclear whether modulation leads to a stable representation of the hand that is adopted in every circumstance, or whether the modulation is closely linked to the spatial context where the expertise occurs. To this aim, a group of 10 experienced Sign Language (SL) interpreters were recruited to study the selective influence of expertise and space localisation in the metric representation of hands. Experiment 1 explored differences in hands' size representation between the SL interpreters and 10 age-matched controls in near-reaching (Condition 1) and far-reaching space (Condition 2), using the localisation task. SL interpreters presented reduced hand size in near-reaching condition, with characteristic underestimation of finger lengths, and reduced overestimation of hands and wrists widths in comparison with controls. This difference was lost in far-reaching space, confirming the effect of expertise on hand representations is closely linked to the spatial context where an action is performed. As SL interpreters are also experts in the use of their face with communication purposes, the effects of expertise in the metrics of the face were also studied (Experiment 2). SL interpreters were more accurate than controls, with overall reduction of width overestimation. Overall, expertise modifies the representation of relevant body parts in a specific and context-dependent manner. Hence, different representations of the same body part can coexist simultaneously.
11

Tajadura-Jiménez, Ana, Aleksander Väljamäe, Iwaki Toshima, Toshitaka Kimura, Manos Tsakiris, and Norimichi Kitagawa. "Action sounds recalibrate perceived tactile distance." Seeing and Perceiving 25 (2012): 217. http://dx.doi.org/10.1163/187847612x648431.

Abstract:
Almost every bodily movement, from the most complex to the most mundane, such as walking, can generate impact sounds that contain spatial information of high temporal resolution. Despite the conclusive evidence about the role that the integration of vision, touch and proprioception plays in updating body-representations, hardly any study has looked at the contribution of audition. We show that the representation of a key property of one’s body, like its length, is affected by the sound of one’s actions. Participants tapped on a surface while progressively extending their right arm sideways, and in synchrony with each tap participants listened to a tapping sound. In the critical condition, the sound originated at double the distance at which participants actually tapped. After exposure to this condition, tactile distances on the test right arm, as compared to distances on the reference left arm, felt bigger than those before the exposure. No evidence of changes in tactile distance reports was found at the quadruple tapping sound distance or the asynchronous auditory feedback conditions. Our results suggest that tactile perception is referenced to an implicit body-representation which is informed by auditory feedback. This is the first evidence of the contribution of self-produced sounds to body-representation, addressing the auditory-dependent plasticity of body-representation and its spatial boundaries.
12

Greenfield, Katie, Danielle Ropar, Kristy Themelis, Natasha Ratcliffe, and Roger Newport. "Developmental Changes in Sensitivity to Spatial and Temporal Properties of Sensory Integration Underlying Body Representation." Multisensory Research 30, no. 6 (2017): 467–84. http://dx.doi.org/10.1163/22134808-00002591.

Abstract:
The closer in time and space that two or more stimuli are presented, the more likely it is that they will be integrated together. A recent study by Hillock-Dunn and Wallace (2012) reported that the size of the visuo-auditory temporal binding window — the interval within which visual and auditory inputs are highly likely to be integrated — narrows over childhood. However, few studies have investigated how sensitivity to temporal and spatial properties of multisensory integration underlying body representation develops in children. This is not only important for sensory processes but has also been argued to underpin social processes such as empathy and imitation (Schütz-Bosbach et al., 2006). We tested 4 to 11 year-olds’ ability to detect a spatial discrepancy between visual and proprioceptive inputs (Experiment One) and a temporal discrepancy between visual and tactile inputs (Experiment Two) for hand representation. The likelihood that children integrated spatially separated visuo-proprioceptive information, and temporally asynchronous visuo-tactile information, decreased significantly with age. This suggests that spatial and temporal rules governing the occurrence of multisensory integration underlying body representation are refined with age in typical development.
13

Roschin, Vadim Y., Alexander A. Frolov, Yves Burnod, and Marc A. Maier. "A Neural Network Model for the Acquisition of a Spatial Body Scheme Through Sensorimotor Interaction." Neural Computation 23, no. 7 (July 2011): 1821–34. http://dx.doi.org/10.1162/neco_a_00138.

Abstract:
This letter presents a novel unsupervised sensory matching learning technique for the development of an internal representation of three-dimensional information. The representation is invariant with respect to the sensory modalities involved. Acquisition of the internal representation is demonstrated with a neural network model of a sensorimotor system of a simple model creature, consisting of a tactile-sensitive body and a multiple-degrees-of-freedom arm with proprioceptive sensitivity. Acquisition of the 3D representation, as well as of a distributed representation of the body scheme, occurs through sensorimotor interactions (i.e., the sensory-motor experience of the creature). Convergence of the learning is demonstrated through computer simulations for the model creature with a 7-DoF arm and a spherical body covered by 20 tactile fields.
14

朱, 荣娟. "Embodied Numerosity: Sensori-Motor and Body Movement Influence Spatial-Numerical Representation." Advances in Psychology 06, no. 10 (2016): 1108–16. http://dx.doi.org/10.12677/ap.2016.610140.

15

Guenther, Frank H., Daniel Bullock, Douglas Greve, and Stephen Grossberg. "Neural Representations for Sensorimotor Control. III. Learning a Body-Centered Representation of a Three-Dimensional Target Position." Journal of Cognitive Neuroscience 6, no. 4 (July 1994): 341–58. http://dx.doi.org/10.1162/jocn.1994.6.4.341.

Abstract:
A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation—otherwise known as a parcellated distributed representation—of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
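As a rough illustrative gloss on the opponent coding described in this abstract (the notation is introduced here for illustration and is not taken from the paper): if $\alpha_L$ and $\alpha_R$ denote the horizontal rotation angles of the left and right eyes toward a target, the summed and differenced outflow signals correspond approximately to

$$\text{angular direction (version)} \approx \tfrac{1}{2}\,(\alpha_L + \alpha_R), \qquad \text{vergence} \approx \alpha_L - \alpha_R .$$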
16

Vallar, Giuseppe. "A hemispheric asymmetry in somatosensory processing." Behavioral and Brain Sciences 30, no. 2 (April 2007): 223–24. http://dx.doi.org/10.1017/s0140525x0700163x.

Abstract:
The model presented in the target article includes feature processing and higher representations. I argue, based on neuropsychological evidence, that spatial representations are also involved in perceptual awareness of somatosensory events. Second, there is an asymmetry, with a right-hemisphere-based bilateral representation of the body. Third, the specific aspect of bodily awareness concerning motor function monitoring involves a network that includes the premotor cortex.
17

McCabe, D. P., D. I. Ben-Tovim, M. K. Walker, and D. Pomeroy. "Does the Body Image Exist in Three Dimensions? The Study of Visual Mental Representation of a Body and a Nonbody Object." Perceptual and Motor Skills 92, no. 1 (February 2001): 223–33. http://dx.doi.org/10.2466/pms.2001.92.1.223.

Abstract:
Do the mental images of 3-dimensional objects recreate the depth characteristics of the original objects? This investigation of the characteristics of mental images utilized a novel boundary-detection task that required participants to relate a pair of crosses to the boundary of an image mentally projected onto a computer screen. 48 female participants with body attitudes within the expected normal range were asked to image their own body and a familiar object from the front and the side. When the visual mental image was derived purely from long-term memory, accuracy was better than chance for the front (64%) and side (63%) of the body and also for the front (55%) and side (68%) of the familiar nonbody object. This suggests that mental images containing depth and spatial information may be generated from information held in long-term memory. Pictorial exposure to views of the front or side of the objects was used to investigate the representations from which this 3-dimensional shape and size information is derived. The results are discussed in terms of three possible representational formats and argue that a front-view 2½-dimensional representation mediates the transfer of information from long-term memory when depth information about the body is required.
18

Auclair, Laurent, and Isabelle Jambaqué. "Lexical-semantic body knowledge in 5- to 11-year-old children: How spatial body representation influences body semantics." Child Neuropsychology 21, no. 4 (May 9, 2014): 451–64. http://dx.doi.org/10.1080/09297049.2014.912623.

19

Abalașei, Beatrice, and Florin Trofin. "Considerations on the correlation between real body and body image." Timisoara Physical Education and Rehabilitation Journal 9, no. 16 (September 1, 2016): 7–12. http://dx.doi.org/10.1515/tperj-2016-0001.

Abstract:
Every individual in society has a representation of their own body in relation to spatial cues, postural cues, time cues, etc., considered by specialists to be the body scheme. Throughout its development, the human being goes through different stages of organization of both the body image and the body scheme. We carried out this study starting from the idea that there could be, in male individuals, a link between body representation (one's own image projected outwardly, assessed by reference to an image presented through a questionnaire) and anthropometric parameters such as body fat and body mass index. The study was conducted on a total of 28 subjects, aged 22.71 ± 2.62 years, with a height of 177.11 ± 6.76 cm and a body weight of 73.56 ± 12.60 kg. For these subjects, body composition was determined by the electromagnetic bioimpedance technique, and self-projection was assessed through a questionnaire. After analyzing the statistical data, our hypothesis was refuted by the lack of mathematical connections between the variables analyzed.
20

Ishida, Hiroaki, Katsumi Nakajima, Masahiko Inase, and Akira Murata. "Shared Mapping of Own and Others' Bodies in Visuotactile Bimodal Area of Monkey Parietal Cortex." Journal of Cognitive Neuroscience 22, no. 1 (January 2010): 83–96. http://dx.doi.org/10.1162/jocn.2009.21185.

Abstract:
Parietal cortex contributes to body representations by integrating visual and somatosensory inputs. Because mirror neurons in ventral premotor and parietal cortices represent visual images of others' actions on the intrinsic motor representation of the self, this matching system may play important roles in recognizing actions performed by others. However, where and how the brain represents others' bodies and correlates self and other body representations remain unclear. We expected that a population of visuotactile neurons in simian parietal cortex would represent not only own but others' body parts. We first searched for parietal visuotactile bimodal neurons in the ventral intraparietal area and area 7b of monkeys, and then examined the activity of these neurons while monkeys were observing visual or tactile stimuli placed on the experimenter's body parts. Some bimodal neurons with receptive fields (RFs) anchored on the monkey's body exhibited visual responses matched to corresponding body parts of the experimenter, and visual RFs near that body part existed in the peripersonal space within approximately 30 cm from the body surface. These findings suggest that the brain could use self representation as a reference for perception of others' body parts in parietal cortex. These neurons may contribute to spatial matching between the bodies of the self and others in both action recognition and imitation.
21

Zhang, Bo, and Yuji Naya. "Medial Prefrontal Cortex Represents the Object-Based Cognitive Map When Remembering an Egocentric Target Location." Cerebral Cortex 30, no. 10 (June 2, 2020): 5356–71. http://dx.doi.org/10.1093/cercor/bhaa117.

Abstract:
A cognitive map, representing an environment around oneself, is necessary for spatial navigation. However, compared with its constituent elements such as individual landmarks, neural substrates of coherent spatial information, which consists in a relationship among the individual elements, remain largely unknown. The present study investigated how the brain codes map-like representations in a virtual environment specified by the relative positions of three objects. Representational similarity analysis revealed an object-based spatial representation in the hippocampus (HPC) when participants located themselves within the environment, while the medial prefrontal cortex (mPFC) represented it when they recollected a target object's location relative to their self-body. During recollection, task-dependent functional connectivity increased between the two areas implying exchange of self-location and target location signals between the HPC and mPFC. Together, the object-based cognitive map, whose coherent spatial information could be formed by objects, may be recruited in the HPC and mPFC for complementary functions during navigation, which may generalize to other aspects of cognition, such as navigating social interactions.
22

Taylor, Holly A., Tad T. Brunyé, and Scott T. Taylor. "Spatial Mental Representation: Implications for Navigation System Design." Reviews of Human Factors and Ergonomics 4, no. 1 (October 2008): 1–40. http://dx.doi.org/10.1518/155723408x342835.

Abstract:
Similarities exist in how people process and represent spatial information and in the factors that contribute to disorientation, whether one is moving through airspace, on the ground, or surgically within the body. As such, design principles for presenting spatial information should bear similarities across these domains but also be somewhat specific to each. In this chapter, we review research in spatial cognition and its application to navigation system design for within-vehicle, aviation, and endoscopic navigation systems. Taken together, the research suggests three general principles for navigation system design consideration. First, multimedia displays should present spatial information visually and action and description information verbally. Second, display organizations should meet users' dynamic navigational goals. Third, navigation systems should be adaptable to users' spatial information preferences. Designers of adaptive navigation display technologies can maximize the effectiveness of those technologies by appealing to the basic spatial cognition processes employed by all users while conforming to users' domain-specific requirements.
23

Tuena, Cosimo, Silvia Serino, Elisa Pedroli, Marco Stramba-Badiale, Giuseppe Riva, and Claudia Repetto. "Building Embodied Spaces for Spatial Memory Neurorehabilitation with Virtual Reality in Normal and Pathological Aging." Brain Sciences 11, no. 8 (August 14, 2021): 1067. http://dx.doi.org/10.3390/brainsci11081067.

Abstract:
Along with deficits in spatial cognition, a decline in body-related information is observed in aging and is thought to contribute to impairments in navigation, memory, and space perception. According to the embodied cognition theories, bodily and environmental information play a crucial role in defining cognitive representations. Thanks to the possibility to involve body-related information, manipulate environmental stimuli, and add multisensory cues, virtual reality is one of the best candidates for spatial memory rehabilitation in aging for its embodied potential. However, current virtual neurorehabilitation solutions for aging and neurodegenerative diseases are in their infancy. Here, we discuss three concepts that could be used to improve embodied representations of the space with virtual reality. The virtual bodily representation is the combination of idiothetic information involved during virtual navigation thanks to input/output devices; the spatial affordances are environmental or symbolic elements used by the individual to act in the virtual environment; finally, the virtual enactment effect is the enhancement on spatial memory provided by actively (cognitively and/or bodily) interacting with the virtual space and its elements. Theoretical and empirical findings will be presented to propose innovative rehabilitative solutions in aging for spatial memory and navigation.
24

Manfron, Louise, Camille Vanderclausen, and Valéry Legrain. "No Evidence for an Effect of the Distance Between the Hands on Tactile Temporal Order Judgments." Perception 50, no. 4 (March 2, 2021): 294–307. http://dx.doi.org/10.1177/0301006621998877.

Abstract:
Localizing somatosensory stimuli is an important process, as it allows us to spatially guide our actions toward the object entering in contact with the body. Accordingly, the positions of tactile inputs are coded according to both somatotopic and spatiotopic representations, the latter one considering the position of the stimulated limbs in external space. The spatiotopic representation has often been evidenced by means of temporal order judgment (TOJ) tasks. Participants’ judgments about the order of appearance of two successive somatosensory stimuli are less accurate when the hands are crossed over the body midline than uncrossed but also when participants’ hands are placed close together when compared with farther away. Moreover, these postural effects might depend on the vision of the stimulated limbs. The aim of this study was to test the influence of seeing the hands, on the modulation of tactile TOJ by the spatial distance between the stimulated limbs. The results showed no influence of the distance between the stimulated hands on TOJ performance and prevent us from concluding whether vision of the hands affects TOJ performance, or whether these variables interact. The reliability of such distance effect to investigate the spatial representations of tactile inputs is questioned.
25

Cardini, Flavia, Patrick Haggard, and Elisabetta Ladavas. "Seeing and feeling for self and other: Proprioceptive spatial location determines multisensory enhancement of touch." Seeing and Perceiving 25 (2012): 113. http://dx.doi.org/10.1163/187847612x647469.

Abstract:
In the Visual Enhancement of Touch (VET), simply viewing one's hand improves tactile spatial perception, even though vision is non-informative. While previous studies had suggested that looking at another person's hand could also enhance tactile perception, no previous study had systematically investigated the differences between viewing one's body and someone else's. The aim of this study was to shed light on the relation between visuo–tactile interactions and the self-other distinction. In Experiment 1 we manipulated the spatial location where a hand was seen. Viewing one's hand enhanced tactile acuity relative to viewing a neutral object, but only when the image of the hand was spatially aligned with the actual location of the participant's unseen hand. The VET effect did not occur when one's hand was viewed at a location other than that experienced proprioceptively. In contrast, viewing another's hand produced enhanced tactile perception irrespective of spatial location. In Experiment 2, we used a multisensory stimulation technique, known as Visual Remapping of Touch, to reduce perceived spatial misalignment of vision and touch. When participants saw an image of their own hand being touched at the same time as the tactile stimulation, the reduction in perceived misalignment caused the VET effect to return, even though the spatial location of the images was not consistent with the actual body posture. Our results suggest that multisensory modulation of touch depends on a representation of one's body that is fundamentally spatial in nature. In contrast, representation of others is free from this spatial constraint.
26

Adibpour, Parvaneh, Jean-Rémy Hochmann, and Liuba Papeo. "Spatial Relations Trigger Visual Binding of People." Journal of Cognitive Neuroscience 33, no. 7 (June 1, 2021): 1343–53. http://dx.doi.org/10.1162/jocn_a_01724.

Abstract:
To navigate the social world, humans must represent social entities and the relationships between those entities, starting with spatial relationships. Recent research suggests that two bodies are processed with particularly high efficiency in visual perception, when they are in a spatial positioning that cues interaction, that is, close and face-to-face. Socially relevant spatial relations such as facingness may facilitate visual perception by triggering grouping of bodies into a new integrated percept, which would make the stimuli more visible and easier to process. We used EEG and a frequency-tagging paradigm to measure a neural correlate of grouping (or visual binding), while female and male participants saw images of two bodies face-to-face or back-to-back. The two bodies in a dyad flickered at frequency F1 and F2, respectively, and appeared together at a third frequency Fd (dyad frequency). This stimulation should elicit a periodic neural response for each body at F1 and F2, and a third response at Fd, which would be larger for face-to-face (vs. back-to-back) bodies, if those stimuli yield additional integrative processing. Results showed that responses at F1 and F2 were higher for upright than for inverted bodies, demonstrating that our paradigm could capture neural activity associated with viewing bodies. Crucially, the response to dyads at Fd was larger for face-to-face (vs. back-to-back) dyads, suggesting integration mediated by grouping. We propose that spatial relations that recur in social interaction (i.e., facingness) promote binding of multiple bodies into a new representation. This mechanism can explain how the visual system contributes to integrating and transforming the representation of disconnected body shapes into structured representations of social events.
27

Fang, Wen, Junru Li, Guangyao Qi, Shenghao Li, Mariano Sigman, and Liping Wang. "Statistical inference of body representation in the macaque brain." Proceedings of the National Academy of Sciences 116, no. 40 (September 3, 2019): 20151–57. http://dx.doi.org/10.1073/pnas.1902334116.

Abstract:
The sense of one's own body is a pillar of self-consciousness and could be investigated by inducing human illusions of artificial objects as part of the self. Here, we present a nonhuman primate version of a rubber-hand illusion that allowed us to determine its computational and neuronal mechanisms. We implemented a video-based system in a reaching task in monkeys and combined it with a causal inference model to establish an objective and quantitative signature for the monkey's body representation. Similar to humans, monkeys were more likely to perceive an external object as part of the self when the dynamics (spatial disparity) and the features (shape and structure) of the visual (V) input were closer to proprioceptive (P) signals. Neural signals in the monkey's premotor cortex reflected the strength of illusion and the likelihood of misattributing the illusory hand to oneself, thus revealing a cortical representation of body ownership.
28

Qinhui, Feng, Wang Weiqiang, and Allam Maalla. "Perception and Behavioral Intention of Cycling Space on Urban Greenway." E3S Web of Conferences 276 (2021): 02010. http://dx.doi.org/10.1051/e3sconf/202127602010.

Abstract:
To explore the spatial power and spatial relationships of urban greenway sports cultural memory, this study analyzes the connotation of urban greenway sports cultural memory using literature review and inductive analysis, a questionnaire survey, interviews, and other research methods, and explains urban greenway sports. Lefebvre's ternary dialectics is used to study the production of the sports cultural memory space of the Guangzhou greenway. Its spatial practice lies in the interpretation of Guangzhou's greenway sports cultural memory: the government is the leading force behind greenway sports cultural memory and a necessary prerequisite for it, while the public is its main body. Its spatial representation lies in the context of greenway sports cultural memory, sorting out the hard memory and soft memory in the memory field of Guangzhou greenway sports culture. The representational space is the sports cultural memory space experienced and directly "lived" by the greenway activists, and the internalization of their cognition, experience, and spatial representation of the greenway's sports cultural memory.
29

Caggiano, Pietro, Elena Bertone, and Gianna Cocchini. "Same action in different spatial locations induces selective modulation of body metric representation." Experimental Brain Research 239, no. 8 (June 17, 2021): 2509–18. http://dx.doi.org/10.1007/s00221-021-06135-3.

30

Reinersmann, Annika, Julia Landwehrt, Elena K. Krumova, Sebastian Ocklenburg, Onur Güntürkün, and Christoph Maier. "Impaired spatial body representation in complex regional pain syndrome type 1 (CRPS I)." Pain 153, no. 11 (November 2012): 2174–81. http://dx.doi.org/10.1016/j.pain.2012.05.025.

31

Leplaideur, Stephanie, Annelise Moulinet-Raillon, Quentin Duché, Lucie Chochina, Karim Jamal, Jean-Christophe Ferré, Elise Bannier, and Isabelle Bonan. "The Neural Bases of Egocentric Spatial Representation for Extracorporeal and Corporeal Tasks: An fMRI Study." Brain Sciences 11, no. 8 (July 22, 2021): 963. http://dx.doi.org/10.3390/brainsci11080963.

Abstract:
(1) Background: Humans use reference frames to elaborate the spatial representations needed for all space-oriented behaviors such as postural control, walking, or grasping. We investigated the neural bases of two egocentric tasks: the extracorporeal subjective straight-ahead task (SSA) and the corporeal subjective longitudinal body plane task (SLB) in healthy participants using functional magnetic resonance imaging (fMRI). This work was an ancillary part of a study involving stroke patients. (2) Methods: Seventeen healthy participants underwent a 3T fMRI examination. During the SSA, participants had to divide the extracorporeal space into two equal parts. During the SLB, they had to divide their body along the midsagittal plane. (3) Results: Both tasks elicited a parieto-occipital network encompassing the superior and inferior parietal lobules and lateral occipital cortex, with a right hemispheric dominance. Additionally, the SLB > SSA contrast revealed activations of the left angular and premotor cortices. These areas, involved in attention and motor imagery suggest a greater complexity of corporeal processes engaging body representation. (4) Conclusions: This was the first fMRI study to explore the SLB-related activity and its complementarity with the SSA. Our results pave the way for the exploration of spatial cognitive impairment in patients.
32

Seo, Min-Hee, Jeh-Kwang Ryu, Byung-Cheol Kim, Sang-Bin Jeon, and Kyoung-Min Lee. "Persistence of metric biases in body representation during the body ownership illusion." PLOS ONE 17, no. 7 (July 26, 2022): e0272084. http://dx.doi.org/10.1371/journal.pone.0272084.

Abstract:
Our perception of the body's metric properties is influenced by axis-dependent biases, known as systematic metric biases in body representation. Systematic metric bias was first reported as Weber's illusion and has been observed in several parts of the body in various patterns. However, systematic metric bias was not observed with a fake hand under the influence of the body ownership illusion during a line length judgment task. The lack of metric bias in that task implies that the tactile modality occupies a less dominant position than it does in perception through the real body. The change in weighting between the visual and tactile modalities during the body ownership illusion has not yet been adequately investigated, despite being a factor that influences perception during the illusion. Therefore, this study aimed to investigate whether the dominance of vision over touch is prominent regardless of task type. To do so, we introduced spatial visuotactile incongruence (2 cm, 3 cm) along the longitudinal and transverse axes during visuotactile localization tasks and measured the intensity of the body ownership illusion using a questionnaire. The results indicated that participants perceived smaller visuotactile incongruence when the discrepancy occurred along the transverse axis rather than the longitudinal axis. This anisotropy in the tolerance of visuotactile incongruence implies the persistence of metric biases in body representation. The results suggest the need for further research on the factors influencing the weighting of the visual and tactile modalities.
33

Stackelberg, Katharine T. Von. "Garden Hybrids." Classical Antiquity 33, no. 2 (October 1, 2014): 395–426. http://dx.doi.org/10.1525/ca.2014.33.2.395.

Abstract:
This article discusses representations of hermaphrodites in the domestic context of Roman gardens and argues that the spatial context of the hermaphrodite body is as germane to critical understanding as the intersexed body itself. The spatial and semantic interrelations between Roman gardens and hermaphrodite images focus on the dynamics of viewing hermaphrodite types in Italo-Roman art (section 1), the spatial configuration of hermaphrodites with documented findspots (section 2), Ovid's introduction of garden imagery in the tale of Salmacis and Hermaphroditus (Met. 4. 285–388) compared to the Salmakis inscription from Kaplan Kalesi at Halicarnassus (section 3), and the historical correlation to Augustan Rome's vegetative symbolism (section 4). This synthesis of material, literary, and historical evidence for hermaphrodite images indicates that their representation in Roman domestic art can be read as an expression of domestic harmony that mirrored the emphasis on heterosexual union and political concord ushered in by Augustus and Livia.
34

Egli, Richard, and Neil F. Stewart. "Chain-Model Shape-Pattern Schemata." Environment and Planning B: Planning and Design 29, no. 5 (October 2002): 779–88. http://dx.doi.org/10.1068/b12842.

Abstract:
A shape-pattern schema is an organized body of knowledge about spatial relationships between shapes which describes the patterns, syntactic structure, and the characteristics of shape patterns. In this paper we show how such a schema can be represented by means of chain models. We also show the advantage of this approach (relative to the previously suggested tree representations) for patterns with certain natural symmetries. To do this, we describe an example, and discuss its implementation by means of the application procedural interface of our system. Because the chain-model formulation subsumes the tree representation as a special case, the chain-model approach can also be used wherever the tree representation would be appropriate.
35

Medendorp, W. Pieter. "Spatial constancy mechanisms in motor control." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (February 27, 2011): 476–91. http://dx.doi.org/10.1098/rstb.2010.0089.

Abstract:
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye–head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
36

Fan, Yale. "Quantum Simulation of Simple Many-Body Dynamics." International Journal of Quantum Information 10, no. 05 (August 2012): 1250049. http://dx.doi.org/10.1142/s0219749912500499.

Abstract:
We describe a general quantum computational algorithm that simulates the time evolution of an arbitrary nonrelativistic, Coulombic many-body system in three dimensions, considering only spatial degrees of freedom. We use a simple discretized model of Schrödinger evolution in the coordinate representation and discuss detailed constructions of the operators necessary to realize the scheme of Wiesner and Zalka. The algorithm is simulated numerically for small test cases, and its outputs are found to be in good agreement with analytical solutions.
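For readers unfamiliar with the underlying numerics, here is a minimal classical sketch of discretized Schrödinger evolution in the coordinate representation, in split-step form for a single 1D particle. The grid size, harmonic potential, and Gaussian initial state are illustrative assumptions, and this sketch does not reproduce the paper's actual quantum-circuit construction in the Wiesner–Zalka scheme.

```python
import numpy as np

# Spatial grid (illustrative values; hbar = m = 1)
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # momentum grid

V = 0.5 * x**2                                    # stand-in potential (harmonic well)

# Displaced Gaussian wave packet, normalized on the grid
psi = np.exp(-(x - 2.0) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, steps = 0.01, 500
half_V = np.exp(-0.5j * V * dt)                   # half-step potential phase
full_K = np.exp(-0.5j * k**2 * dt)                # full-step kinetic phase

for _ in range(steps):
    psi = half_V * psi                            # potential half-step (position space)
    psi = np.fft.ifft(full_K * np.fft.fft(psi))   # kinetic step (momentum space)
    psi = half_V * psi                            # second potential half-step

print("norm after evolution:", np.sum(np.abs(psi) ** 2) * dx)  # ~1, the evolution is unitary
```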
37

Wang, Yangyang, Yibo Li, and Xiaofei Ji. "Human Action Recognition Based on Normalized Interest Points and Super-Interest Points." International Journal of Humanoid Robotics 11, no. 01 (March 2014): 1450005. http://dx.doi.org/10.1142/s0219843614500054.

Abstract:
Vision-based human action recognition is currently one of the most active research topics in computer vision. The feature representation has a direct and crucial impact on recognition performance. Feature representations based on bag-of-words are popular in current research, but the spatial and temporal relationships among the features are usually discarded. To address this issue, a novel feature representation based on normalized interest points, called the super-interest point, is proposed and used to recognize human actions. The novelty of the proposed feature is that the spatial-temporal correlation between the interest points and the human body can be added directly to the representation, without regard to scale and location variance of the points, by introducing normalized point clustering. The approach involves three tasks. First, to handle the diversity of human location and scale, interest points are normalized based on the normalization of the human region. Second, to capture the spatial-temporal correlation among the interest points, normalized points with similar spatial and temporal distances are grouped into a super-interest point using a three-dimensional clustering algorithm. Finally, a new feature representation is obtained by describing the appearance characteristics of the super-interest points and the location relationships among them. The proposed representation establishes the relationship between local features and the human figure. Experiments on the Weizmann, KTH, and UCF Sports datasets demonstrate that the proposed feature is effective for human action recognition.
38

Goossens, H. H. L. M., and A. J. van Opstal. "Influence of Head Position on the Spatial Representation of Acoustic Targets." Journal of Neurophysiology 81, no. 6 (June 1, 1999): 2720–36. http://dx.doi.org/10.1152/jn.1999.81.6.2720.

Abstract:
Influence of head position on the spatial representation of acoustic targets. Sound localization in humans relies on binaural differences (azimuth cues) and monaural spectral shape information (elevation cues) and is therefore the result of a neural computational process. Despite the fact that these acoustic cues are referenced with respect to the head, accurate eye movements can be generated to sounds in complete darkness. This ability necessitates the use of eye position information. So far, however, sound localization has been investigated mainly with a fixed head position, usually straight ahead. Yet the auditory system may rely on head motor information to maintain a stable and spatially accurate representation of acoustic targets in the presence of head movements. We therefore studied the influence of changes in eye-head position on auditory-guided orienting behavior of human subjects. In the first experiment, we used a visual-auditory double-step paradigm. Subjects made saccadic gaze shifts in total darkness toward brief broadband sounds presented before an intervening eye-head movement that was evoked by an earlier visual target. The data show that the preceding displacements of both eye and head are fully accounted for, resulting in spatially accurate responses. This suggests that auditory target information may be transformed into a spatial (or body-centered) frame of reference. To further investigate this possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This is shown in a second experiment where pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, both under head-fixed and -free conditions. To test, in a third experiment, whether the perceived elevation of tones relies on a head- or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence information about head position is used under static conditions too. Interestingly, the influence of head position also depended on the tone frequency. Thus tone-evoked ocular saccades typically showed a partial compensation for changes in static head position, whereas noise-evoked eye-head saccades fully compensated for intervening changes in eye-head position. We propose that the auditory localization system combines the acoustic input with head-position information to encode targets in a spatial (or body-centered) frame of reference. In this way, accurate orienting responses may be programmed despite intervening eye-head movements. A conceptual model, based on the tonotopic organization of the auditory system, is presented that may account for our findings.
39

Vallar, Giuseppe. "Spatial frames of reference and somatosensory processing: a neuropsychological perspective." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1360 (October 29, 1997): 1401–9. http://dx.doi.org/10.1098/rstb.1997.0126.

Abstract:
In patients with lesions in the right hemisphere, frequently involving the posterior parietal regions, left-sided somatosensory (and visual and motor) deficits not only reflect a disorder of primary sensory processes, but also have a higher-order component related to a defective spatial representation of the body. This additional factor, related to right brain damage, is clinically relevant: contralesional hemianaesthesia (and hemianopia and hemiplegia) is more frequent in right brain-damaged patients than in patients with damage to the left side of the brain. Three main lines of investigation suggest the existence of this higher-order pathological factor. (i) Right brain-damaged patients with left hemineglect may show physiological evidence of preserved processing of somatosensory stimuli, of which they are not aware. Similar results have been obtained in the visual domain. (ii) Direction-specific vestibular, visual optokinetic and somatosensory or proprioceptive stimulations may displace spatial frames of reference in right brain-damaged patients with left hemineglect, reducing or increasing the extent of the patients' ipsilesional rightward directional error, and bring about similar directional effects in normal subjects. These stimulations, which may improve or worsen a number of manifestations of the neglect syndrome (such as extrapersonal and personal hemineglect), have similar effects on the severity of left somatosensory deficits (defective detection of tactile stimuli, position sense disorders). However, visuospatial hemineglect and the somatosensory deficits improved by these stimulations are independent, albeit related, disorders. (iii) The severity of left somatosensory deficits is affected by the spatial position of body segments, with reference to the midsagittal plane of the trunk. A general implication of these observations is that spatial (non-somatotopic) levels of representation contribute to corporeal awareness. The neural basis of these spatial frames includes the posterior parietal and the premotor frontal regions. These spatial representations could provide perceptual-premotor interfaces for the organization of movements (e.g. pointing, locomotion) directed towards targets in personal and extrapersonal space. In line with this view, there is evidence that the sensory stimulations that modulate left somatosensory deficits affect left motor disorders in a similar, direction-specific, fashion.
40

Cvoro, Uros. "Monument to anti-monumentality: the space of the National Museum Australia." Museum and Society 4, no. 3 (April 9, 2015): 116–28. http://dx.doi.org/10.29311/mas.v4i3.83.

Abstract:
This article explores the space of the National Museum Australia as a complex interplay between different spatial levels, and the way in which this interplay enables the NMA to foreground internal tensions architecturally. I am also interested in the way these internal tensions contribute towards creating representations of spaces as politically charged. I argue that the space of the NMA should be read as riven with tension between monumental space and what I refer to as protean monumental space. The tension between the monumental and the protean monumental is always already entailed within the spatial practice and spatial representation of producing the NMA’s space. This tension is internal and central to the museum itself, yet it is a tension that leads to a production of a ‘third space’ that is already predicated by the other two, or is revealed by the experiencing body of the museum visitor.
APA, Harvard, Vancouver, ISO, and other styles
41

Hartmann, Matthias, Martin H. Fischer, and Fred W. Mast. "Sharing a mental number line across individuals? The role of body position and empathy in joint numerical cognition." Quarterly Journal of Experimental Psychology 72, no. 7 (November 7, 2018): 1732–40. http://dx.doi.org/10.1177/1747021818809254.

Full text
Abstract:
A growing body of research shows that the human brain acts differently when performing a task together with another person than when performing the same task alone. In this study, we investigated the influence of a co-actor on numerical cognition using a joint random number generation (RNG) task. We found that participants generated relatively smaller numbers when they were located to the left (vs. right) of a co-actor (Experiment 1), as if the two individuals shared a mental number line and predominantly selected numbers corresponding to their relative body position. Moreover, the mere presence of another person on the left or right side, or the processing of numbers from a loudspeaker on the left or right side, had no influence on the magnitude of generated numbers (Experiment 2), suggesting that a bias in RNG only emerged during interpersonal interactions. Interestingly, the effect of relative body position on RNG was driven by participants with high trait empathic concern towards others, pointing towards a mediating role of feelings of sympathy for joint compatibility effects. Finally, the spatial bias emerged only after the co-actors swapped their spatial position, suggesting that joint spatial representations are constructed only after the spatial reference frame became salient. In contrast to previous studies, our findings cannot be explained by action co-representation because the consecutive production of numbers does not involve conflict at the motor response level. Our results therefore suggest that spatial reference coding, rather than motor mirroring, can determine joint compatibility effects. Our results demonstrate how physical properties of interpersonal situations, such as the relative body position, shape seemingly abstract cognition.
APA, Harvard, Vancouver, ISO, and other styles
42

White, Robert L., and Lawrence H. Snyder. "Spatial constancy and the brain: insights from neural networks." Philosophical Transactions of the Royal Society B: Biological Sciences 362, no. 1479 (January 11, 2007): 375–82. http://dx.doi.org/10.1098/rstb.2006.1965.

Full text
Abstract:
To form an accurate internal representation of visual space, the brain must accurately account for movements of the eyes, head or body. Updating of internal representations in response to these movements is especially important when remembering spatial information, such as the location of an object, since the brain must rely on non-visual extra-retinal signals to compensate for self-generated movements. We investigated the computations underlying spatial updating by constructing a recurrent neural network model to store and update a spatial location based on a gaze shift signal, and to do so flexibly based on a contextual cue. We observed a striking similarity between the patterns of behaviour produced by the model and monkeys trained to perform the same task, as well as between the hidden units of the model and neurons in the lateral intraparietal area (LIP). In this report, we describe the similarities between the model and single unit physiology to illustrate the usefulness of neural networks as a tool for understanding specific computations performed by the brain.
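The core computation described in this abstract — holding a remembered target location and remapping it when the eyes move, with a contextual cue deciding whether to remap — can be illustrated with a simple vector-updating rule. The sketch below is an assumption-laden toy, not the authors' recurrent network: the function name, coordinate convention, and context labels are invented for illustration.

```python
import numpy as np

def update_memory(remembered_loc, gaze_shift, context):
    """Toy eye-centred spatial updating rule (illustrative, not the paper's model).

    remembered_loc : np.ndarray, shape (2,) - stored target location in eye-centred coordinates
    gaze_shift     : np.ndarray, shape (2,) - displacement of gaze (e.g. a saccade vector)
    context        : 'update' -> compensate for the gaze shift (spatial constancy)
                     'hold'   -> ignore the gaze shift (purely retinotopic storage)
    """
    if context == "update":
        # Spatial constancy: a world-fixed target shifts opposite to the eye movement.
        return remembered_loc - gaze_shift
    return remembered_loc.copy()

# Example: a target remembered 10 deg to the right; the eyes then saccade 10 deg right.
target = np.array([10.0, 0.0])
saccade = np.array([10.0, 0.0])
print(update_memory(target, saccade, "update"))  # [0. 0.]  -> target now at the fovea
print(update_memory(target, saccade, "hold"))    # [10. 0.] -> stored location unchanged
```

In the study itself, a trained recurrent network learns this remapping and the context-dependent switch from examples; the snippet only makes the target computation explicit.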
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Jingyi, Yuan Run, and Hongwei Shi. "Emotional state representation and detection method of users in library space based on body posture recognition." Digital Library Perspectives 36, no. 2 (April 16, 2020): 113–25. http://dx.doi.org/10.1108/dlp-11-2019-0041.

Full text
Abstract:
Purpose: In the information commons (IC) space of a library, recognizing the emotional state of users is important for the IC to fulfil its role. With this in mind, this paper aims to discuss the human expression of user emotion. Design/methodology/approach: An emotional state recognition method based on body-posture change under video monitoring is proposed. In this method, two parameters are proposed to represent the emotional state of users. Finally, the distribution of users’ overall emotional state is recognized. Findings: Changes in human posture are found to reflect the emotional state of users to a certain extent. The spatial frequency of the users’ average body-position change and the per capita body-position change can reflect the spatial distribution of individual and overall body-position change, respectively. Originality/value: The method in this paper can effectively overcome the inaccuracy of manually identifying video-monitoring images, especially when the number of users is large, and can support the construction of university library IC space and provide a basis for the setting of environmental parameters.
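The abstract specifies its two body-position parameters only loosely. The sketch below shows one plausible way to compute per-user, average, and per-capita body-position change from tracked positions across video frames; the array layout, units, and function name are assumptions for illustration, not the authors' method.

```python
import numpy as np

def body_position_change(tracks):
    """Compute simple body-position-change measures from tracked user positions.

    tracks : np.ndarray, shape (n_users, n_frames, 2)
        2-D body-centre positions of each tracked user across video frames
        (e.g. centroids of detected poses), in pixels or metres.
    """
    # Frame-to-frame displacement magnitudes for every user: (n_users, n_frames - 1)
    steps = np.linalg.norm(np.diff(tracks, axis=1), axis=2)
    per_user = steps.mean(axis=1)            # mean displacement of each user
    average = per_user.mean()                # average body-position change overall
    per_capita = steps.sum() / tracks.shape[0]  # total displacement per user present
    return per_user, average, per_capita

# Example with two users tracked over four frames.
tracks = np.array([[[0, 0], [1, 0], [1, 1], [2, 1]],
                   [[5, 5], [5, 5], [5, 6], [5, 6]]], dtype=float)
print(body_position_change(tracks))
```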
APA, Harvard, Vancouver, ISO, and other styles
44

Van Beuzekom, A. D., and J. A. M. Van Gisbergen. "Properties of the Internal Representation of Gravity Inferred From Spatial-Direction and Body-Tilt Estimates." Journal of Neurophysiology 84, no. 1 (July 2000): 11–27. http://dx.doi.org/10.1152/jn.2000.84.1.11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Oltmann, Andra, Roman Kusche, and Philipp Rostalski. "Spatial Sensitivity of ECG Electrode Placement." Current Directions in Biomedical Engineering 7, no. 2 (October 1, 2021): 151–54. http://dx.doi.org/10.1515/cdbme-2021-2039.

Full text
Abstract:
The electrocardiogram (ECG) is a well-known technique used to diagnose cardiac diseases. To acquire the spatial signal characteristics from the thorax, multiple electrodes are commonly used. Displacements of electrodes affect the signal morphologies and can lead to incorrect diagnoses. For quantitative analysis of these effects, we propose the use of a numerical computer simulation. In order to create a realistic representation of the human thorax, including the heart and lung, a three-dimensional model with simplified geometry is used. The electrical excitation of the heart is modelled on a cellular level via the bidomain approach. To numerically solve the differential equations describing the signal propagation within the body, we use the finite element method in COMSOL Multiphysics®. The spatial gradients of the simulated body potentials are calculated to determine placement sensitivity maps. The simulated results show that the sensitivity differs for each considered point in time of each ECG wave. In general, the impact of displacement increases as an electrode is located closer to the signal source. However, in some specific regions associated with differential ECG leads the placement sensitivity distribution deviates from this simple circular pattern. The results provide useful information to enhance the understanding of placement-specific effects on classical ECG features. By additionally considering patient-specific characteristics in the future, the model can be used to investigate further body-related aspects such as geometrical body shape or the composition of various tissue types.
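The placement-sensitivity idea — how strongly a small electrode displacement changes the recorded potential — can be illustrated by taking the spatial gradient magnitude of a surface-potential map. The synthetic dipole-like potential field and grid spacing below are assumptions for illustration; the study itself derives the potentials from a bidomain finite-element simulation in COMSOL.

```python
import numpy as np

# Synthetic body-surface potential map at one time instant (e.g. the R peak),
# sampled on a regular grid over the thorax surface (illustrative values only).
ny, nx = 100, 80               # grid points
dy, dx = 0.005, 0.005          # grid spacing in metres (5 mm)
y, x = np.mgrid[0:ny, 0:nx]
# A dipole-like potential pattern standing in for simulated ECG potentials.
phi = np.exp(-((x - 30)**2 + (y - 50)**2) / 200.0) \
    - np.exp(-((x - 50)**2 + (y - 50)**2) / 200.0)

# Placement sensitivity: magnitude of the spatial gradient of the potential.
# Large values mean a small electrode displacement changes the signal a lot.
dphi_dy, dphi_dx = np.gradient(phi, dy, dx)
sensitivity = np.hypot(dphi_dx, dphi_dy)     # volts per metre of displacement

print("max sensitivity:", sensitivity.max())
print("most sensitive grid point:", np.unravel_index(sensitivity.argmax(), phi.shape))
```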
APA, Harvard, Vancouver, ISO, and other styles
46

Vallar, Giuseppe. "Spatial Neglect, Balint-Holmes' and Gerstmann's Syndrome, and Other Spatial Disorders." CNS Spectrums 12, no. 7 (July 2007): 527–36. http://dx.doi.org/10.1017/s1092852900021271.

Full text
Abstract:
Brain-damaged patients with a lesion or dysfunction involving the parietal cortex may show a variety of neuropsychological impairments involving spatial cognition. The more frequent and disabling deficit is the syndrome of unilateral spatial neglect that, in a nutshell, consists in a bias of spatial representation and attention toward the side of the hemispheric lesion, affecting extrapersonal space, personal (ie, the body) space, or both. The deficit is more frequent and severe after damage to the right hemisphere, involving particularly the posterior-inferior parietal cortex at the temporo-parietal junction. Damage to these posterior parietal regions may also impair visuospatial short-term memory, which may be associated with and worsen spatial neglect. The neural network supporting spatial representation, attention, and short-term memory is, however, more extensive, including the right premotor cortex. Disorders of drawing and building objects (traditionally termed constructional apraxia) are also a frequent indicator of posterior parietal damage in the left and in the right hemispheres. Other less frequent deficits, which, however, have a relevant localizing value, include optic ataxia (namely, defective reaching of visual objects in the absence of elementary visuo-motor impairments), which is typically brought about by damage to the superior parietal lobule. Optic ataxia, together with deficits of visual attention, of estimating distances and depth, and with apraxia of gaze, constitutes the severely disabling Balint-Holmes' syndrome, which is typically associated with bilateral posterior parietal and occipital damage. Finally, lesions of the posterior parietal lobule (angular gyrus) in the left hemisphere may bring about a tetrad of symptoms (left-right disorientation, acalculia, finger agnosia, and agraphia) termed Gerstmann's syndrome, which also exists in a developmental form.
APA, Harvard, Vancouver, ISO, and other styles
47

Amemiya, Tomohiro, Yasushi Ikei, and Michiteru Kitazaki. "Remapping Peripersonal Space by Using Foot-Sole Vibrations Without Any Body Movement." Psychological Science 30, no. 10 (September 23, 2019): 1522–32. http://dx.doi.org/10.1177/0956797619869337.

Full text
Abstract:
The limited space immediately surrounding our body, known as peripersonal space (PPS), has been investigated by focusing on changes in the multisensory processing of audio-tactile stimuli occurring within or outside the PPS. Some studies have reported that the PPS representation is extended by body actions such as walking. However, it is unclear whether the PPS changes when a walking-like sensation is induced but the body neither moves nor is forced to move. Here, we show that a rhythmic pattern consisting of walking-sound vibrations applied to the soles of the feet, but not the forearms, boosted tactile processing when looming sounds were located near the body. The findings suggest that an extension of the PPS representation can be triggered by stimulating the soles in the absence of body action, which may automatically drive a motor program for walking, leading to a change in spatial cognition around the body.
APA, Harvard, Vancouver, ISO, and other styles
48

Corti, Claudia, Niccolò Butti, Alessandra Bardoni, Sandra Strazzer, and Cosimo Urgesi. "Body Processing in Children and Adolescents with Traumatic Brain Injury: An Exploratory Study." Brain Sciences 12, no. 8 (July 22, 2022): 962. http://dx.doi.org/10.3390/brainsci12080962.

Full text
Abstract:
Dysfunctions in body processing have been documented in adults with brain damage, while limited information is available for children. This study aimed to investigate body processing in children and adolescents with traumatic brain injury (TBI) (N = 33), compared to peers with typical development. Two well-known computerized body-representation paradigms, namely Visual Body Recognition and Visuo-spatial Imagery, were administered. In the first paradigm, the body inversion and composite illusion effects were tested with a matching-to-sample task as measures of configural and holistic processing of others' bodies, respectively. The second paradigm used a laterality judgement task to investigate the ability to perform first-person and object-based mental spatial transformations of one's own body and of external objects, respectively. Body stimuli did not convey any emotional contents or symbolic meanings. Patients with TBI had difficulties with mental transformations of both body and object stimuli, displaying deficits in motor and visual imagery abilities that were not limited to body processing. Therefore, cognitive rehabilitation of body processing in TBI might benefit from the inclusion of both general training on visuo-spatial abilities and specific exercises aimed at boosting visual body perception and motor imagery.
APA, Harvard, Vancouver, ISO, and other styles
49

Lyu, Xin, Yiwei Fang, Baogen Tong, Xin Li, and Tao Zeng. "Multiscale Normalization Attention Network for Water Body Extraction from Remote Sensing Imagery." Remote Sensing 14, no. 19 (October 7, 2022): 4983. http://dx.doi.org/10.3390/rs14194983.

Full text
Abstract:
Extracting water bodies is an important task in remote sensing imagery (RSI) interpretation. Deep convolutional neural networks (DCNNs) show great potential in feature learning and are widely used for water body interpretation in RSI. However, the accuracy of DCNNs is still unsatisfactory due to the many heterogeneous features of water bodies, such as spectrum, geometry, and spatial size. To address this problem, this paper proposes a multiscale normalization attention network (MSNANet) which can accurately extract water bodies in complicated scenarios. First, a multiscale normalization attention (MSNA) module was designed to merge multiscale water body features and highlight feature representation. Then, an optimized atrous spatial pyramid pooling (OASPP) module was developed to refine the representation by leveraging context information, which improves segmentation performance. Furthermore, a feature-enhancing head module (FEH) was devised to realize high-level feature enhancement and reduce training time. Extensive experiments were carried out on two benchmarks: the Surface Water dataset and the Qinghai–Tibet Plateau Lake dataset. The results indicate that the proposed model outperforms current mainstream models on OA (overall accuracy), F1-score, kappa, and MIoU (mean intersection over union). Moreover, the effectiveness of the proposed modules was shown to be favorable through an ablation study.
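The abstract describes the MSNA and OASPP modules only at a high level. The block below sketches the general pattern of a multiscale attention head for segmentation — parallel dilated convolutions whose fused output gates the input features — purely as an illustration; the class name, channel counts, and dilation rates are assumptions, not the published MSNANet architecture.

```python
import torch
import torch.nn as nn

class MultiscaleAttentionSketch(nn.Module):
    """Illustrative multiscale attention block (not the published MSNANet)."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel 3x3 convolutions with different dilation rates capture
        # water bodies of different spatial sizes.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # Fuse the branches into a per-pixel, per-channel attention map.
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, 1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = self.fuse(multiscale)
        return x * attention  # reweight input features by the attention map

# Example: a feature map from a segmentation backbone.
features = torch.randn(1, 64, 128, 128)
out = MultiscaleAttentionSketch(64)(features)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```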
APA, Harvard, Vancouver, ISO, and other styles
50

Werner, J., and M. Buse. "Temperature profiles with respect to inhomogeneity and geometry of the human body." Journal of Applied Physiology 65, no. 3 (September 1, 1988): 1110–18. http://dx.doi.org/10.1152/jappl.1988.65.3.1110.

Full text
Abstract:
Temperature profiles within the human body are highly dependent on the geometry and inhomogeneity of the body. Physical parameters such as density and heat conductivity of the various tissues, and variables such as blood flow and metabolic heat production of different organs, are spatially distributed and thereby influence the temperature profiles within the human body. Current physiological knowledge allows one to take into account up to 54 different spatially distributed values for each parameter. An adequate representation of the anatomy of the body requires a three-dimensional spatial grid of at least 0.5-1.0 cm. This is achieved by photogrammetric treatment of three-dimensional anatomic models of the human body. As a first essential result, the simulation system has produced a realistic picture of the topography of temperatures under neutral conditions. Compatibility of reality and simulation was achieved solely on the basis of physical considerations and the physiological database. Therefore the simulation is suited to the extrapolation of temperature profiles that cannot be obtained experimentally.
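Models of this kind are commonly built around a bioheat equation with spatially distributed tissue parameters; the Pennes formulation below is shown as a representative example of the balance the abstract describes (conduction, blood-flow heat exchange, metabolic heat), not necessarily the exact equations used by the authors.

```latex
% Pennes bioheat equation with spatially distributed tissue parameters
\rho(\mathbf{r})\, c(\mathbf{r})\, \frac{\partial T}{\partial t}
  = \nabla \cdot \bigl( k(\mathbf{r})\, \nabla T \bigr)
  + \rho_b c_b\, \omega_b(\mathbf{r}) \bigl( T_a - T \bigr)
  + q_m(\mathbf{r})
```

Here k is the tissue heat conductivity, ω_b the local blood perfusion, T_a the arterial blood temperature, and q_m the metabolic heat production; all of these vary over the three-dimensional anatomical grid mentioned in the abstract.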
APA, Harvard, Vancouver, ISO, and other styles