Journal articles on the topic 'Coded gaze'

Consult the top 50 journal articles for your research on the topic 'Coded gaze.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Feuston, Jessica L., and Anne Marie Piper. "Beyond the Coded Gaze." Proceedings of the ACM on Human-Computer Interaction 2, CSCW (November 2018): 1–21. http://dx.doi.org/10.1145/3274320.

2

Cheleski, Dominic J., Isabelle Mareschal, Andrew J. Calder, and Colin W. G. Clifford. "Eye gaze is not coded by cardinal mechanisms alone." Proceedings of the Royal Society B: Biological Sciences 280, no. 1764 (August 7, 2013): 20131049. http://dx.doi.org/10.1098/rspb.2013.1049.

Abstract:
Gaze is an important social cue in regulating human and non-human interactions. In this study, we employed an adaptation paradigm to examine the mechanisms underlying the perception of another's gaze. Previous research has shown that the interleaved presentation of leftwards and rightwards gazing adaptor stimuli results in observers judging a wider range of gaze deviations as being direct. We applied a similar paradigm to examine how human observers encode oblique (e.g. upwards and to the left) directions of gaze. We presented observers with interleaved gaze adaptors and examined whether adaptation differed between congruent (adaptor and test along same axis) and incongruent conditions. We find greater adaptation in congruent conditions along cardinal (horizontal and vertical) and non-cardinal (oblique) directions suggesting gaze is not coded alone by cardinal mechanisms. Our results suggest that the functional aspects of gaze processing might parallel that of basic visual features such as orientation.
3

Pritchett, Lisa M., and Laurence R. Harris. "Perceived touch location is coded using a gaze signal." Experimental Brain Research 213, no. 2-3 (May 11, 2011): 229–34. http://dx.doi.org/10.1007/s00221-011-2713-0.

4

Greene, Harold H., and Keith Rayner. "Eye-Movement Control in Direction-Coded Visual Search." Perception 30, no. 2 (February 2001): 147–57. http://dx.doi.org/10.1068/p3056.

Abstract:
Subjects searched for a target among distractors which were arranged randomly or such that each distractor provided information about the relative position of a target. Trials were presented either in a blocked design (so that the subjects knew a priori the contextual information in the display) or in a mixed design. When the distractors provided information about target position, there were (i) shorter manual RTs, (ii) fewer fixations made in search of the target, (iii) longer mean fixation durations, (iv) shorter initial fixation durations, (v) shorter mean gaze shifts, (vi) a smaller area of fixation dispersion, and (vii) a greater percentage of optimally directed saccades. Except for gaze shifts, the results were uninfluenced by whether or not there was a blocked or a mixed presentation. The results of the study suggest that despite noise in the search mechanism, fixation durations were adjusted to process directly the currently fixated element(s).
5

Jones, Stephanie A. H., and Denise Y. P. Henriques. "Memory for proprioceptive and multisensory targets is partially coded relative to gaze." Neuropsychologia 48, no. 13 (November 2010): 3782–92. http://dx.doi.org/10.1016/j.neuropsychologia.2010.10.001.

6

Calder, Andrew J., Rob Jenkins, Anneli Cassel, and Colin W. G. Clifford. "Visual representation of eye gaze is coded by a nonopponent multichannel system." Journal of Experimental Psychology: General 137, no. 2 (2008): 244–61. http://dx.doi.org/10.1037/0096-3445.137.2.244.

7

Rai, Yashas, and Patrick Le Callet. "Do gaze disruptions indicate the perceived quality of non-uniformly coded natural scenes?" Electronic Imaging 2017, no. 14 (January 29, 2017): 104–9. http://dx.doi.org/10.2352/issn.2470-1173.2017.14.hvei-124.

8

Krombholc, Viktorija. "With Shining Eyes: Watching and Being Watched in Sarah Waters’s 'Tipping the Velvet'." Годишњак Филозофског факултета у Новом Саду 40, no. 1 (December 10, 2015): 151. http://dx.doi.org/10.19090/gff.2015.1.151-162.

Abstract:
The aim of this paper is to explore the dynamics of looking and being looked at in Sarah Waters’s Tipping the Velvet. The analysis is theoretically framed by feminist film theory and the concept of the male gaze. According to Laura Mulvey, classic narrative cinema reflects social views on sexual difference and reaffirms the active male/passive female binary. The novel raises the issue of what happens with the gaze when the protagonists are non-heteronormative, a question further made complex by the theme of cross-dressing, which destabilizes visual gender coding and makes it unreliable. The female narrator is infatuated with a male impersonator only to become one herself, and the visual interaction that spurs their sexual relationship on does not fit neatly into Mulvey’s analysis, as both the bearer of the gaze and its object are female, a woman coded as masculine. The male gaze is further deconstructed as the main female character becomes a prostitute, passing for male and working with male clients. Finally, the novel questions the controlling aspect of the gaze implicit in Mulvey’s essay, as the gaze is reimagined as a potential source of power to be desired and invited.
9

McDonough, Kim, Dustin Crowther, Paula Kielstra, and Pavel Trofimovich. "Exploring the potential relationship between eye gaze and English L2 speakers’ responses to recasts." Second Language Research 31, no. 4 (June 3, 2015): 563–75. http://dx.doi.org/10.1177/0267658315589656.

Abstract:
This exploratory study investigated whether joint attention through eye gaze was predictive of second language (L2) speakers’ responses to recasts. L2 English learners (N = 20) carried out communicative tasks with research assistants who provided feedback in response to non-targetlike (non-TL) forms. Their interaction was audio-recorded and their eye gaze behavior was tracked simultaneously using the faceLAB system. Transcripts were coded for characteristics of the feedback episodes (linguistic target, feedback type, intonation, prosody) and types of response (no opportunity, no reformulation, non-TL response, TL response). Eye gaze lengths for the researcher (when producing the feedback move) and the L2 speaker (when responding to feedback) were obtained in seconds using Captiv software. Following data pruning to reduce the data set to clausal recasts in response to grammatical errors, a logistic regression model revealed that both L2 speaker and mutual eye gaze were predictive of TL responses. Methodological issues for eye-tracking research during L2 interaction are provided, and suggestions for future research are discussed.
10

Carnevale, Michael J., Lisa M. Pritchett, and Laurence R. Harris. "The effect of eccentric gaze on tactile localization on areas of the body that cannot be seen." Seeing and Perceiving 25 (2012): 103. http://dx.doi.org/10.1163/187847612x647351.

Abstract:
Eccentric gaze systematically biases touch localization on the arm and waist. These perceptual errors suggest that touch location is at least partially coded in a visual reference frame. Here we investigated whether touches to non-visible parts of the body are also affected by gaze position. If so, can the direction of mislocalization tell us how they are laid out in the visual representation? To test this, an array of vibro-tactors was attached to either the lower back or the forehead. During trials, participants were guided to orient the position of their head (90° left, right or straight ahead for touches on the lower back) or head and eyes (combination of ±15° left, right or straight ahead head and eye positions for touches on the forehead) using LED fixation targets and a head mounted laser. Participants then re-oriented to straight ahead and reported perceived touch location on a visual scale using a mouse and computer screen. Similar to earlier experiments on the arm and waist, perceived touch location on the forehead and lower back was biased in the same direction as eccentric head and eye position. This is evidence that perceived touch location is at least partially coded in a visual reference frame even for parts of the body that are not typically seen.
11

Lee, Joonbum, Mauricio Muñoz, Lex Fridman, Trent Victor, Bryan Reimer, and Bruce Mehler. "Investigating the correspondence between driver head position and glance location." PeerJ Computer Science 4 (February 19, 2018): e146. http://dx.doi.org/10.7717/peerj-cs.146.

Abstract:
The relationship between a driver’s glance orientation and corresponding head rotation is highly complex due to its nonlinear dependence on the individual, task, and driving context. This paper presents expanded analytic detail and findings from an effort that explored the ability of head pose to serve as an estimator for driver gaze by connecting head rotation data with manually coded gaze region data using both a statistical analysis approach and a predictive (i.e., machine learning) approach. For the latter, classification accuracy increased as visual angles between two glance locations increased. In other words, the greater the shift in gaze, the higher the accuracy of classification. This is an intuitive but important concept that we make explicit through our analysis. The highest accuracy achieved was 83% using the method of Hidden Markov Models (HMM) for the binary gaze classification problem of (a) glances to the forward roadway versus (b) glances to the center stack. Results suggest that although there are individual differences in head-glance correspondence while driving, classifier models based on head-rotation data may be robust to these differences and therefore can serve as reasonable estimators for glance location. The results suggest that driver head pose can be used as a surrogate for eye gaze in several key conditions including the identification of high-eccentricity glances. Inexpensive driver head pose tracking may be a key element in detection systems developed to mitigate driver distraction and inattention.
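
As an informal illustration of the head-pose-to-glance classification idea summarized above (and not the authors' code), the following Python sketch trains a simple classifier on synthetic head-rotation data; the feature values, the two glance regions, and the use of logistic regression in place of the paper's Hidden Markov Models are all assumptions made for brevity.

```python
# Minimal sketch: classify glance region (forward roadway vs. center stack) from
# head rotation. Synthetic data and a logistic-regression stand-in for the HMM
# approach described in the abstract; nothing here comes from the study itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical head-rotation features (yaw, pitch in degrees) for two glance regions.
road = rng.normal(loc=[0.0, 0.0], scale=[3.0, 2.0], size=(200, 2))      # label 0: roadway
stack = rng.normal(loc=[12.0, -10.0], scale=[4.0, 3.0], size=(200, 2))  # label 1: center stack
X = np.vstack([road, stack])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
# The larger the angular separation between the two glance locations, the higher
# this accuracy tends to be, mirroring the trend reported in the abstract.
```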
12

Jensen, Kelly, Sassan Noazin, Leandra Bitterfeld, Andrea Carcelen, Natalia I. Vargas-Cuentas, Daniela Hidalgo, Alejandra Valenzuela, et al. "Autism Detection in Children by Combined Use of Gaze Preference and the M-CHAT-R in a Resource-Scarce Setting." Journal of Autism and Developmental Disorders 51, no. 3 (February 16, 2021): 994–1006. http://dx.doi.org/10.1007/s10803-021-04878-0.

Abstract:
Most children with autism spectrum disorder (ASD), in resource-limited settings (RLS), are diagnosed after the age of four. Our work confirmed and extended results of Pierce that eye tracking could discriminate between typically developing (TD) children and those with ASD. We demonstrated the initial 15 s was at least as discriminating as the entire video. We evaluated the GP-MCHAT-R, which combines the first 15 s of manually-coded gaze preference (GP) video with M-CHAT-R results on 73 TD children and 28 children with ASD, 36–99 months of age. The GP-MCHAT-R (AUC = 0.89 (95%CI: 0.82–0.95)) performed significantly better than the M-CHAT-R (AUC = 0.78 (95%CI: 0.71–0.85)) and gaze preference (AUC = 0.76 (95%CI: 0.64–0.88)) alone. This tool may enable early screening for ASD in RLS.
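
For readers unfamiliar with the AUC figures quoted above, here is a minimal, self-contained sketch of how such a discrimination score is computed; the labels and screener scores are invented for illustration and do not come from the study.

```python
# Sketch: computing the area under the ROC curve (AUC) for a screening score, the
# metric used above to compare GP, M-CHAT-R, and the combined GP-MCHAT-R.
# All numbers below are invented for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 0, 0, 0, 1, 1, 1])  # 0 = typically developing, 1 = ASD
single_screen = np.array([0.1, 0.3, 0.2, 0.6, 0.5, 0.4, 0.8, 0.7])    # hypothetical scores
combined_screen = np.array([0.1, 0.2, 0.1, 0.3, 0.4, 0.8, 0.9, 0.8])  # hypothetical scores

print("single screener AUC:  ", roc_auc_score(labels, single_screen))
print("combined screener AUC:", roc_auc_score(labels, combined_screen))
# A higher AUC means the score separates the two groups better; the study reports
# 0.89 for GP-MCHAT-R versus 0.78 (M-CHAT-R) and 0.76 (gaze preference) alone.
```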
13

Constantin, Alina G., Hongying Wang, and J. Douglas Crawford. "Role of Superior Colliculus in Adaptive Eye–Head Coordination During Gaze Shifts." Journal of Neurophysiology 92, no. 4 (October 2004): 2168–84. http://dx.doi.org/10.1152/jn.00103.2004.

Abstract:
The goal of this study was to determine which aspects of adaptive eye–head coordination are implemented upstream or downstream from the motor output layers of the superior colliculus (SC). Two monkeys were trained to perform head-free gaze shifts while looking through a 10° aperture in opaque, head-fixed goggles. This training produced context-dependent alterations in eye–head coordination, including a coordinated pattern of saccade–vestibuloocular reflex (VOR) eye movements that caused eye position to converge toward the aperture, and an increased contribution of head movement to the gaze shift. One would expect the adaptations that were implemented downstream from the SC to be preserved in gaze shifts evoked by SC stimulation. To test this, we analyzed gaze shifts evoked from 19 SC sites in monkey 1 and 38 sites in monkey 2, both with and without goggles. We found no evidence that the goggle paradigm altered the basic gaze position–dependent spatial coding of the evoked movements (i.e., gaze was still coded in an eye-centered frame). However, several aspects of the context-dependent coordination strategy were preserved during stimulation, including the adaptive convergence of final eye position toward the goggles aperture, and the position-dependent patterns of eye and head movement required to achieve this. For example, when initial eye position was offset from the learned aperture location at the time of stimulation, a coordinated saccade–VOR eye movement drove it back to the original aperture, and the head compensated to preserve gaze kinematics. Some adapted amplitude–velocity relationships in eye, gaze, and head movement also may have been preserved. In contrast, context-dependent changes in overall eye and head contribution to gaze amplitude were not preserved during SC stimulation. We conclude that 1) the motor output command from the SC to the brain stem can be adapted to produce different position-dependent coordination strategies for different behavioral contexts, particularly for eye-in-head position, but 2) these brain stem coordination mechanisms implement only the default (normal) level of head amplitude contribution to the gaze shift. We propose that a parallel cortical drive, absent during SC stimulation, is required to adjust the overall head contribution for different behavioral contexts.
14

Davidson, Judy, and Michelle Helstein. "Queering the Gaze: Calgary Hockey Breasts, Dynamics of Desire, and Colonial Hauntings." Sociology of Sport Journal 33, no. 4 (December 2016): 282–93. http://dx.doi.org/10.1123/ssj.2016-0011.

Abstract:
This paper compares two hockey-related breast-flashing events that occurred in Calgary, Alberta, Canada. The first was performed by Calgary Flames fans, the ‘Flamesgirls’, in the 2004 NHL Stanley Cup final, and the second flashing event occurred when members and fans of the Booby Orr hockey team participated in lifting their shirts and jerseys at a lesbian hockey tournament at the 2007 Outgames/Western Cup held in Calgary. We deploy an analysis of visual psychic economies to highlight psychoanalytic framings of masculinized and feminized subject positions in both heteronormative and lesbigay-coded sporting spaces. We suggest there is a queer twist to the Booby Orr flashing context, which we read as disruptive and potentially resistive. The paper ends by turning to Avery Gordon’s (1997) Ghostly Matters, to consider how even in its queer transgression, the Booby Orr flashing scene is simultaneously haunted and saturated by the absent presence of colonial technologies of visuality and sexual violence. It is argued that in this case, openings for transgressive gender dynamics might be imaginable—even as those logics themselves are disciplined and perhaps made possible through racialized colonial framings of appropriate desire.
15

Ramsdell-Hudock, Heather L., Andrew Stuart, and Douglas F. Parham. "Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development." Journal of Speech, Language, and Hearing Research 61, no. 2 (February 15, 2018): 246–56. http://dx.doi.org/10.1044/2017_jslhr-s-17-0117.

Abstract:
Purpose: We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant. Method: Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded for vocal type (i.e., squeal, growl, raspberry, vowel, cry, laugh), facial affect (i.e., positive, negative, neutral), and gaze direction (i.e., to person, to mirror, or not directed). Results: Of the 18,236 utterances analyzed, durations were typically shortest at 14 months of age and longest at 16 months of age. Statistically significant changes were observed in utterance durations across age for all variables of interest. Conclusion: Despite variation in duration of infant utterances, developmental patterns were observed. For these infants, utterance durations appear to become more consolidated later in development, after the 1st year of life. Indeed, 12 months is often noted as the typical age of onset for 1st words and might possibly be a point in time when utterance durations begin to show patterns across communicative variables.
16

Munoz, D. P., D. Guitton, and D. Pelisson. "Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat. III. Spatiotemporal characteristics of phasic motor discharges." Journal of Neurophysiology 66, no. 5 (November 1, 1991): 1642–66. http://dx.doi.org/10.1152/jn.1991.66.5.1642.

Abstract:
1. In this paper we describe the movement-related discharges of tectoreticular and tectoreticulospinal neurons [together called TR(S)Ns] that were recorded in the superior colliculus (SC) of alert cats trained to generate orienting movements in various behavioral situations; the cats' heads were either completely unrestrained (head free) or immobilized (head fixed). TR(S)Ns are organized into a retinotopically coded motor map. These cells can be divided into two groups, fixation TR(S)Ns [fTR(S)Ns] and orientation TR(S)Ns [oTR(S)Ns], depending on whether they are located, respectively, within or outside the zero (or area centralis) representation of the motor map in the rostral SC. 2. oTR(S)Ns discharged phasic motor bursts immediately before the onset of gaze shifts in both the head-free and head-fixed conditions. Ninety-five percent of the oTR(S)Ns tested (62/65) increased their rate of discharge before a visually triggered gaze shift, the amplitude and direction of which matched the cell's preferred movement vector. For movements along the optimal direction, each cell produced a burst discharge for gaze shifts of all amplitudes equal to or greater than the optimum. Hence, oTR(S)Ns had no distal limit to their movement fields. The timing of the burst relative to the onset of the gaze shift, however, depended on gaze shift amplitude: each TR(S)N reached its peak discharge when the instantaneous position of the visual axis relative to the target (i.e., instantaneous gaze motor error) matched the cell's optimal vector, regardless of the overall amplitude of the movement. 3. The intensity of the movement-related burst discharge depended on the behavioral context. For the same vector, the movement-related increase in firing was greatest for visually triggered movements and less pronounced when the cat oriented to a predicted target, a condition in which only 76% of the cells tested (35/46) increased their discharge rate. The weakest movement-related discharges were associated with spontaneous gaze shifts. 4. For some oTR(S)Ns, the average firing frequency in the movement-related burst was correlated to the peak velocity of the movement trajectory in both head-fixed and head-free conditions. Typically, when the head was unrestrained, the correlation to peak gaze velocity was better than that to either peak eye or head velocity alone. 5. Gaze shifts triggered by a high-frequency train of collicular microstimulation had greater peak velocities than comparable amplitude movements elicited by a low-frequency train of stimulation. (ABSTRACT TRUNCATED AT 400 WORDS)
17

Lerna, A., D. Esposito, L. Russo, and A. Massagli. "The Efficacy of the PECS for Improving the Communicative, Relational and Social Skills in Children with Autistic Disorder: Preliminary Results." European Psychiatry 24, S1 (January 2009): 1. http://dx.doi.org/10.1016/s0924-9338(09)71177-0.

Abstract:
The aim of the current study was to investigate the efficacy of the PECS (Picture Exchange Communication System) for developing communication, alternating gaze and pointing in a sample of children with Autistic Disorder (AD). The sample included 5 children diagnosed with AD (DSM-IV-TR), with no verbal language, followed by the team of the Rehabilitation Centre belonging to Scientific Institute “E. Medea”, Association “La Nostra Famiglia”, Branch of Ostuni (Italy). The children were tested with neuropsychiatric, psycholinguistic and psychological assessments before and after the trial. The PECS treatment continued for two years at a frequency of three sessions a week (45 minutes each). The observed behavioural variables were: spontaneous requests for objects using the PECS notebook, alternating gaze, pointing, and possible vocalizing and verbalizing on imitation. The data were collected at the beginning and at the end of the trial, using play-interaction videotapes lasting 20 minutes each. Segments (10 min) of the videos were randomly selected for coding. The behaviours were coded using the Observer XT 7, and the results were analyzed statistically with the SPSS programme. The results show a significant increase in the number of spontaneous requests and in the capacity for alternating gaze, pointing, vocalizing and verbalizing on imitation. Finally, the PECS appears not only to develop functional communication in children with AD, but also to increase their social communicative behaviours. Nevertheless, further studies are necessary.
18

De Meyer, Kris, and Michael W. Spratling. "Multiplicative Gain Modulation Arises Through Unsupervised Learning in a Predictive Coding Model of Cortical Function." Neural Computation 23, no. 6 (June 2011): 1536–67. http://dx.doi.org/10.1162/neco_a_00130.

Abstract:
The combination of two or more population-coded signals in a neural model of predictive coding can give rise to multiplicative gain modulation in the response properties of individual neurons. Synaptic weights generating these multiplicative response properties can be learned using an unsupervised, Hebbian learning rule. The behavior of the model is compared to empirical data on gaze-dependent gain modulation of cortical cells and found to be in good agreement with a range of physiological observations. Furthermore, it is demonstrated that the model can learn to represent a set of basis functions. This letter thus connects an often-observed neurophysiological phenomenon and important neurocomputational principle (gain modulation) with an influential theory of brain operation (predictive coding).
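
To make the idea of multiplicative gain modulation concrete, the following toy Python sketch scales a Gaussian retinal tuning curve by a gaze-dependent gain; it is a generic gain-field illustration, not the predictive coding network described in the article, and every parameter value is assumed.

```python
# Toy illustration of multiplicative gain modulation: a model neuron's Gaussian
# retinal tuning curve is scaled (not shifted) by a gain that depends on gaze
# position. A generic gain-field sketch; all parameter values are invented.
import numpy as np

retinal_pos = np.linspace(-40, 40, 81)   # stimulus position on the retina (degrees)

def response(retinal_pos, gaze_deg, pref=0.0, sigma=10.0, gain_slope=0.02):
    tuning = np.exp(-(retinal_pos - pref) ** 2 / (2 * sigma ** 2))  # retinal tuning
    gain = 1.0 + gain_slope * gaze_deg                              # gaze-dependent gain
    return gain * tuning                                            # multiplicative interaction

for gaze in (-20.0, 0.0, 20.0):
    r = response(retinal_pos, gaze)
    peak_at = retinal_pos[np.argmax(r)]
    print(f"gaze {gaze:+5.1f} deg -> peak {r.max():.2f} at retinal position {peak_at:.1f} deg")
# The preferred retinal position stays fixed while the response amplitude scales
# with gaze, which is the signature of the gain fields the model is compared against.
```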
19

Mooshagian, Eric, and Lawrence H. Snyder. "Spatial eye–hand coordination during bimanual reaching is not systematically coded in either LIP or PRR." Proceedings of the National Academy of Sciences 115, no. 16 (April 2, 2018): E3817–E3826. http://dx.doi.org/10.1073/pnas.1718267115.

Abstract:
We often orient to where we are about to reach. Spatial and temporal correlations in eye and arm movements may depend on the posterior parietal cortex (PPC). Spatial representations of saccade and reach goals preferentially activate cells in the lateral intraparietal area (LIP) and the parietal reach region (PRR), respectively. With unimanual reaches, eye and arm movement patterns are highly stereotyped. This makes it difficult to study the neural circuits involved in coordination. Here, we employ bimanual reaching to two different targets. Animals naturally make a saccade first to one target and then the other, resulting in different patterns of limb–gaze coordination on different trials. Remarkably, neither LIP nor PRR cells code which target the eyes will move to first. These results suggest that the parietal cortex plays at best only a permissive role in some aspects of eye–hand coordination and makes the role of LIP in saccade generation unclear.
20

Pritchett, Lisa M., Michael J. Carnevale, and Laurence R. Harris. "Body and gaze centered coding of touch locations during a dynamic task." Seeing and Perceiving 25 (2012): 195. http://dx.doi.org/10.1163/187847612x648242.

Abstract:
We have previously reported that head position affects the perceived location of touch differently depending on the dynamics of the task the subject is involved in. When touch was delivered and responses were made with the head rotated, touch location shifted in the opposite direction to the head position, consistent with body-centered coding. When touch was delivered with the head rotated but the response was made with the head centered, touch shifted in the same direction as the head, consistent with gaze-centered coding. Here we tested whether moving the head between touch and response would modulate the effects of head position on touch location. Each trial consisted of three periods: in the first, arrows and LEDs guided the subject to a randomly chosen head orientation (90° left, right, or center) and a vibration stimulus was delivered. Next, they were either guided to turn their head or to remain in the same location. In the final period they were again guided to turn or to remain in the same location before reporting the perceived location of the touch on a visual scale using a mouse and computer screen. Reported touch location was shifted in the opposite direction of head orientation during touch presentation, regardless of the orientation during response or whether a movement was made before the response. The size of the effect was much reduced compared with our previous results. These results are consistent with touch location being coded in both a gaze-centered and a body-centered reference frame under dynamic conditions.
21

Munoz, D. P., and D. Guitton. "Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat. II. Sustained discharges during motor preparation and fixation." Journal of Neurophysiology 66, no. 5 (November 1, 1991): 1624–41. http://dx.doi.org/10.1152/jn.1991.66.5.1624.

Abstract:
1. We recorded from electrophysiologically identified output neurons of the superior colliculus (SC)--tectoreticular and tectoreticulospinal neurons [together called TR(S)Ns]--in the alert cat with head either unrestrained or immobilized. A cat actively exploring its visual surrounds typically makes a series of coordinated eye-head orienting movements that rapidly shift the visual axis from one point to another. These single-step shifts in gaze position (gaze = eye-in-space = eye-in-head + head-in-space) are separated by periods in which the visual axis remains stationary with respect to surrounding space. 2. Eighty-seven percent (86/99) of the TR(S)Ns studied during periods when the visual axis was stationary presented a sustained discharge, the intensity of which depended on the magnitude and direction of the vector drawn between current gaze position and the gaze position required to fixate a target of interest (gaze position error or GPE). The maximum sustained discharge recorded from each TR(S)N corresponded to a specific GPE vector and was correlated with the cell's position on the SC's retinotopically coded motor map. 3. The 86 TR(S)Ns could be divided into two classes. “Fixation TR(S)Ns” [fTR(S)Ns, n = 12] discharged maximally when the animal attentively fixated a target of interest (i.e. GPE = 0 degrees). These neurons were located in the rostral SC and had visual receptive fields that included a representation of the area centralis. “Orientation TR(S)Ns” [oTR(S)Ns, n = 62] had visual receptive fields that excluded the area centralis and discharged for nonzero GPEs. The oTR(S)Ns were recorded more caudally on the SC's map. 4. For a given value of GPE, an ensemble of TR(S)Ns was active. When the cat changed its gaze position relative to a fixed target of interest, the zone of sustained activity shifted to a new collicular site. Thus, to maintain the maximum sustained discharge of a TR(S)N when target position was changed relative to the fixed body, it was necessary that gaze move to a new position that reestablished the preferred GPE. 5. The areal extent of GPEs for which a TR(S)N discharged defined a gaze position error field (GPEF) that was approximately coaligned with the cell's visual receptive field. The maximum sustained discharge occurred when GPE corresponded approximately to the center of the cell's GPEF. 6. The diameter of a TR(S)N's GPEF was related to the magnitude of that cell's optimal GPE. fTR(S)Ns had the smallest GPEFs, approximately 15-20 degrees; GPEF diameter was larger for oTR(S)Ns. (ABSTRACT TRUNCATED AT 400 WORDS)
22

Schwichtenberg, AJ, Ashleigh M. Kellerman, Gregory S. Young, Meghan Miller, and Sally Ozonoff. "Mothers of children with autism spectrum disorders: Play behaviors with infant siblings and social responsiveness." Autism 23, no. 4 (June 28, 2018): 821–33. http://dx.doi.org/10.1177/1362361318782220.

Abstract:
Mother–infant interactions are a proximal process in early development and may be especially salient for children who are at risk for social difficulties (i.e. infant siblings of children with autism spectrum disorder). To inform how indices of maternal behaviors may improve parent-mediated interventions designed to mitigate autism spectrum disorder risk, the present study explored maternal social responsiveness ratings and social behaviors during dyadic play interactions. Dyads were recruited from families with at least one older child with autism spectrum disorder (high-risk group, n = 90) or families with no history of autism spectrum disorder (low-risk group, n = 62). As part of a prospective study, interactions were coded when infant siblings were 6, 9, and 12 months of age, for gaze, affect, vocalizations, and multimodal bids or responses (i.e. social smiles). Maternal social responsiveness was indexed via the Social Responsiveness Scale. Mothers in both risk groups had comparable Social Responsiveness Scale scores and social behaviors during play. Two maternal behaviors emerged as positive correlates of infant social behaviors and are thus of high relevance to parent-mediated interventions. Specifically, more maternal positive affect and the use of multimodal bids or responses were associated with more infant positive affect, vocalizations, gaze to face, and multimodal bids or responses.
23

Masataka, Nobuo. "The relation between index-finger extension and the acoustic quality of cooing in three-month-old infants." Journal of Child Language 22, no. 2 (June 1995): 247–57. http://dx.doi.org/10.1017/s0305000900009776.

Abstract:
Fourteen full-term, healthy, three-month-old infants were observed during a total of 15 minutes spontaneous face-to-face interaction with their mothers. Facial and manual actions, gaze direction and vocalizations were coded. The infants' cooing vocalizations were categorized into syllabic and vocalic sounds. Index-finger extension occurred frequently in sequence with syllabic sounds, which are speech-like vocalizations, but rarely occurred in sequence with vocalic sounds. No other categories of nonvocal behaviours showed such a relationship. In a subsequent experiment, the infants experienced either conversational turn-taking or random responsiveness from their mothers. In the turn-taking condition, the infants produced a higher ratio of syllabic to vocalic sounds, and a higher frequency of index-finger extension. These results suggest a strong connection between speech and the pointing gesture long before the infant can actually talk.
24

O'Hanlon, Ann, Therese Mendez, and Melissa Morrissette. "Gender Codes and Aging: Comparison of Features in Two Women's Magazines." Innovation in Aging 4, Supplement_1 (December 1, 2020): 324. http://dx.doi.org/10.1093/geroni/igaa057.1039.

Abstract:
Magazines and other media promote beauty standards and gender roles in feature articles and advertising. Publications present idealized images of women often in contrast to the average reader’s appearance. Analyses of such images suggest gender roles are reinforced through subtle cues embedded in hand gestures, eye gaze, head posture, and body position (Goffman, 1976). This study analyzed a recurrent feature in two different magazines presenting an idealized standard of aging to mature women. The first magazine, MORE, featured mature women, typically between the ages of 40 and 60, with the banner of “This is what (woman’s age) looks like.” MORE magazine is no longer in press, but another magazine, Woman’s Day, began a similar recurrent column featuring a woman between 40 and 60 with the title “Own Your Age—Yes, I am (woman’s age).” Both features included copy describing the woman’s perspective on life and aging and a listing of specific beauty products that she uses. These features were analyzed as advertisements, because they promote a message about being a woman of a certain age and the specific products used to achieve that look. Three researchers coded 43 images from MORE magazine and 30 images from Woman’s Day for physical characteristics of aging and evidence of Goffman’s gender codes. Most photos presented women who appeared younger than their stated age. Images showed the presence of Goffman’s gender codes including feminine touch, ritualization of subordination, licensed withdrawal, and infantilization, and these were more prevalent in the MORE feature than in the Woman’s Day column.
25

Pillai, Meena T. "‘Camera Obscura’ to ‘Camera Dentata’: Women Directors and the Politics of Gender in Malayalam Cinema." BioScope: South Asian Screen Studies 11, no. 1 (June 2020): 44–60. http://dx.doi.org/10.1177/0974927620939330.

Abstract:
This article examines women directors in Malayalam cinema as historical subjects, looking at the manner in which they place themselves within Kerala’s cultural semiotics and its popular imaginary, disrupting or legitimising an illusion coded to the measure of gender desires and differences within its semiosphere. The logic of commercial cinema demands that women directors fall in sync with the representative politics of the male gaze and a capitalist libidinal economy, seducing women into passive codes of femininity and aligning men within the registers of a hegemonic masculinity, in effect foreclosing the play of alternative languages of desire. Malayalam cinema has had two kinds of women directors, one who tries to puncture this logic from within the male bastions of popular cinema, and the second who strives to be an ‘other’ to the mythmakers of the phallic order. The article attempts to read the first mode of intervention using the Marxian specular metaphor of the camera obscura as a hierarchical apparatus of ideological inversion where the real is substituted by a spectacle of the illusory. To analyse the latter, the article puts forward the metaphor of camera dentata – that modus of representation which seeks to topple the patriarchal and capitalist ideological predispositions of the cinematic apparatus, thus rendering it capable of diminishing the power of phallic signifiers and ‘the moral panics of sexuality’ they engender.
26

Lai, Philip To. "Expressivity in children with autism and Williams syndrome." Advances in Autism 6, no. 4 (June 30, 2020): 277–88. http://dx.doi.org/10.1108/aia-11-2019-0044.

Abstract:
Purpose: The purpose of this study is to investigate the social and affective aspects of communication in school-age children with HFA and school-age children with WS using a micro-analytic approach. Social communication is important for success at home, school, work and in the community. Lacking the ability to effectively process and convey information can lead to deficits in social communication. Individuals with high functioning autism (HFA) and individuals with Williams syndrome (WS) often have significant impairments in social communication that impact their relationships with others. Currently, little is known about how school-age children use and integrate verbal and non-verbal behaviors in the context of a social interaction. Design/methodology/approach: A micro-analytic coding scheme was devised to reveal which channels children use to convey information. Language, eye gaze behaviors and facial expressions of the child were coded during this dyadic social interaction. These behaviors were coded throughout the entire interview, as well as when the child was the speaker and when the child was the listener. Findings: Language results continue to pose problems for the HFA and WS groups compared to their typically developing (TD) peers. For non-verbal communicative behaviors, a qualitative difference in the use of eye gaze was found between the HFA and WS groups. For facial expression, the WS and TD groups produced more facial expressions than the HFA group. Research limitations/implications: No differences were observed in the HFA group when playing different roles in a conversation, suggesting they are not as sensitive to the social rules of a conversation as their peers. Insights from this study add knowledge toward understanding social-communicative development in school-age children. Originality/value: In this study, two non-verbal behaviors will be assessed in multiple contexts: the entire biographical interview, when the child is the speaker and when the child is the listener. These social and expressive measures give an indication of how expressive school-age children are and provide information on their attention, affective state and communication skills when conversing with an adult. Insights from this study will add knowledge toward understanding social-communicative development in school-age children.
27

Peebles, Stacey. "Lines of Sight: Watching War in Jarhead and My War: Killing Time in Iraq." PMLA/Publications of the Modern Language Association of America 124, no. 5 (October 2009): 1662–76. http://dx.doi.org/10.1632/pmla.2009.124.5.1662.

Abstract:
Jarhead, Anthony Swofford's 2003 memoir of the Persian Gulf War, and My War: Killing Time in Iraq, Colby Buzzell's 2005 memoir of the Iraq War, emphasize the authors' voyeuristic delight in watching war movies before and during their military service. What follows their enthusiastic consumption of “military pornography,” however, is a crisis of nonidentification and a lingering uncertainty about the significance of war in their own lives. Swofford and Buzzell find that the gaze they initially wielded is turned on them, and in response Swofford roils with sexually coded anger and frustration while Buzzell chooses to amplify his exposure by starting a blog. The two memoirs, then, provide a compelling account of the relation between changing technologies of representation and the experience of postmodern war. These lines of sight, all targeting the spectacle of combat, reveal the contemporary intersections among war, media, and agency.
28

Newlands, Shawn D., and Min Wei. "Tests of linearity in the responses of eye-movement-sensitive vestibular neurons to sinusoidal yaw rotation." Journal of Neurophysiology 109, no. 10 (May 15, 2013): 2571–84. http://dx.doi.org/10.1152/jn.00930.2012.

Abstract:
The rotational vestibulo-ocular reflex in primates is linear and stabilizes gaze in space over a large range of head movements. Best evidence suggests that position-vestibular-pause (PVP) and eye-head velocity (EHV) neurons in the vestibular nuclei are the primary mediators of vestibulo-ocular reflexes for rotational head movements, yet the linearity of these neurons has not been extensively tested. The current study was undertaken to understand how varying magnitudes of yaw rotation are coded in these neurons. Sixty-six PVP and 41 EHV neurons in the rostral vestibular nuclei of 7 awake rhesus macaques were recorded over a range of frequencies (0.1 to 2 Hz) and peak velocities (7.5 to 210°/s at 0.5 Hz). The sensitivity (gain) of the neurons decreased with increasing peak velocity of rotation for all PVP neurons and EHV neurons sensitive to ipsilateral rotation (type I). The sensitivity of contralateral rotation-sensitive (type II) EHV neurons did not significantly decrease with increasing peak velocity. These data show that, like non-eye-movement-related vestibular nuclear neurons that are believed to mediate nonlinear vestibular functions, PVP neurons involved in the linear vestibulo-ocular reflex also behave in a nonlinear fashion. Similar to other sensory nuclei, the magnitude of the vestibular stimulus is not linearly coded by the responses of vestibular neurons; rather, amplitude compression extends the dynamic range of PVP and type I EHV vestibular neurons.
29

Hsieh, Yu-Hsin, Maria Borgestig, Deepika Gopalarao, Joy McGowan, Mats Granlund, Ai-Wen Hwang, and Helena Hemmingsson. "Communicative Interaction with and without Eye-Gaze Technology between Children and Youths with Complex Needs and Their Communication Partners." International Journal of Environmental Research and Public Health 18, no. 10 (May 12, 2021): 5134. http://dx.doi.org/10.3390/ijerph18105134.

Abstract:
Use of eye-gaze assistive technology (EGAT) provides children/youths with severe motor and speech impairments communication opportunities by using eyes to control a communication interface on a computer. However, knowledge about how using EGAT contributes to communication and influences dyadic interaction remains limited. Aim: By video-coding dyadic interaction sequences, this study investigates the impacts of employing EGAT, compared to the Non-EGAT condition on the dyadic communicative interaction. Method: Participants were six dyads with children/youths aged 4–19 years having severe physical disabilities and complex communication needs. A total of 12 film clips of dyadic communication activities with and without EGAT in natural contexts were included. Based on a systematic coding scheme, dyadic communication behaviors were coded to determine the interactional structure and communicative functions. Data were analyzed using a three-tiered method combining group and individual analysis. Results: When using EGAT, children/youths increased initiations in communicative interactions and tended to provide more information, while communication partners made fewer communicative turns, initiations, and requests compared to the Non-EGAT condition. Communication activities, eye-control skills, and communication abilities could influence dyadic interaction. Conclusion: Use of EGAT shows potential to support communicative interaction by increasing children’s initiations and intelligibility, and facilitating symmetrical communication between dyads.
30

Schmidt, Chris L., and Katharine R. Lawson. "Caregiver attention-focusing and children's attention-sharing behaviours as predictors of later verbal IQ in very low birthweight children." Journal of Child Language 29, no. 1 (February 2002): 3–22. http://dx.doi.org/10.1017/s0305000901004913.

Abstract:
Specific relationships between verbal and nonverbal aspects of caregiver attention-focusing events and later verbal IQ were investigated for a risk sample of 26 very low birthweight [VLBW], preterm [PT] children. Videotaped interactions between VLBW, PT children at 2;0 and their caregivers were coded for caregiver attention-focusing speech and/or caregiver attention-focusing gestures (display, demonstration and pointing), caregiver gesture–speech combinations, and for child attention-sharing through gesture and social gaze. To investigate the specific effects of caregiver and child interactional factors, analyses statistically controlled for cognitive status. Simultaneous multiple regression analyses found that overall caregiver attention-focusing involving gesture, child attention-sharing behaviours, and cognitive status each significantly and uniquely contributed to verbal IQ at 3;0. Further analyses contrasted the contributions of caregiver gesture with relevant descriptive speech, caregiver gesture with no speech/nondescriptive speech, and caregiver pointing. Results of these analyses suggest that caregiver gesture with relevant descriptive speech makes a unique and positive contribution to later language performance.
31

Matsuda, Noriyuki, and Haruhiko Takeuchi. "Frequent Pattern Mining of Eye-Tracking Records Partitioned into Cognitive Chunks." Applied Computational Intelligence and Soft Computing 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/101642.

Abstract:
Assuming that scenes would be visually scanned by chunking information, we partitioned fixation sequences of web page viewers into chunks using isolate gaze point(s) as the delimiter. Fixations were coded in terms of the segments in a 5 × 5 mesh imposed on the screen. The identified chunks were mostly short, consisting of one or two fixations. These were analyzed with respect to the within- and between-chunk distances in the overall records and the patterns (i.e., subsequences) frequently shared among the records. Although the two types of distances were both dominated by zero- and one-block shifts, the primacy of the modal shifts was less prominent between chunks than within them. The lower primacy was compensated by the longer shifts. The patterns frequently extracted at three threshold levels were mostly simple, consisting of one or two chunks. The patterns revealed interesting properties as to segment differentiation and the directionality of the attentional shifts.
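
A minimal sketch of the chunking and mesh-coding steps described above could look like the following; the screen size, the sample fixations, and the convention of marking an isolated gaze point with None are assumptions for illustration, not details taken from the paper.

```python
# Sketch: code fixations into cells of a 5 x 5 mesh over the screen and split the
# sequence into chunks wherever an isolated gaze point (here, a None entry) occurs.
# Screen size, sample data, and the delimiter convention are invented for illustration.
SCREEN_W, SCREEN_H = 1280, 1024
MESH = 5

def mesh_cell(x, y):
    """Map a fixation (x, y) in pixels to a (row, col) cell of the 5 x 5 mesh."""
    col = min(int(x / SCREEN_W * MESH), MESH - 1)
    row = min(int(y / SCREEN_H * MESH), MESH - 1)
    return row, col

def chunk_fixations(fixations):
    """Split a fixation sequence into chunks, using None (isolated gaze point) as delimiter."""
    chunks, current = [], []
    for fix in fixations:
        if fix is None:                 # delimiter: close the current chunk
            if current:
                chunks.append(current)
                current = []
        else:
            current.append(mesh_cell(*fix))
    if current:
        chunks.append(current)
    return chunks

fixations = [(100, 80), (130, 95), None, (600, 500), None, (1200, 900), (1150, 880)]
print(chunk_fixations(fixations))   # prints [[(0, 0), (0, 0)], [(2, 2)], [(4, 4), (4, 4)]]
```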
32

Hahn, Laura J., Nancy C. Brady, and Theresa Versaci. "Communicative Use of Triadic Eye Gaze in Children With Down Syndrome, Autism Spectrum Disorder, and Other Intellectual and Developmental Disabilities." American Journal of Speech-Language Pathology 28, no. 4 (November 19, 2019): 1509–22. http://dx.doi.org/10.1044/2019_ajslp-18-0155.

Abstract:
Purpose: This study examines differences in the communicative use of triadic eye gaze (TEG) during a communicative interaction in 2 neurodevelopmental disorders: Down syndrome (DS) and autism spectrum disorders (ASD), and a 3rd group of varying disabilities associated with intellectual and developmental disabilities (IDDs). Also, the relationship between TEG use and language abilities was explored. Method: Participants were 45 children, 15 in each group. The frequency of TEG was coded during a scripted communication assessment when children were between 3 and 6 years of age (37–73 months). Receptive and expressive language was measured using raw scores from the Mullen Scales of Early Learning concurrently between 3 and 6 years and again 2 years later when children were between 5 and 8 years (59–92 months). Results: Descriptively, children with DS had a higher frequency of TEG than children with ASD and IDD, but significant differences were only observed between children with DS and ASD. More TEG at Time 1 in children with DS was associated with higher receptive language at Time 1 and higher expressive language at Time 2. For children with ASD, a trend for a positive association between TEG at Time 1 and language abilities at Time 2 was observed. No significant associations were observed for children with IDD. Conclusion: Children with DS used TEG significantly more than children with ASD in this sample. Identifying strengths and weaknesses in TEG use is important because providing caregiver training to facilitate TEG can result in increased opportunities to respond with language models and promote language development.
33

Kaneko, Chris R. S. "Eye Movement Deficits After Ibotenic Acid Lesions of the Nucleus Prepositus Hypoglossi in Monkeys. I. Saccades and Fixation." Journal of Neurophysiology 78, no. 4 (October 1, 1997): 1753–68. http://dx.doi.org/10.1152/jn.1997.78.4.1753.

Abstract:
It has been suggested that the function of the nucleus prepositus hypoglossi (nph) is the mathematical integration of velocity-coded signals to produce position-coded commands that drive abducens motoneurons and generate horizontal eye movements. In early models of the saccadic system, a single integrator provided not only the signal that maintained steady gaze after a saccade but also an efference copy of eye position, which provided a feedback signal to control the dynamics of the saccade. In this study, permanent, serial ibotenic acid lesions were made in the nph of three rhesus macaques, and their effects were studied while the alert monkeys performed a visual tracking task. Localized damage to the nph was confirmed in both Nissl and immunohistochemically stained material. The lesions clearly were correlated with long-lasting deficits in eye movement. The animals' ability to fixate in the dark was compromised quickly and uniformly so that saccades to peripheral locations were followed by postsaccadic centripetal drift. The time constant of the drift decreased to approximately one-tenth of its normal values but remained 10 times longer than that attributable to the mechanics of the eye. In contrast, saccades were affected minimally. The results are more consistent with models of the neural saccade generator that use separate feedback and position integrators than with the classical models, which use a single multipurpose element. Likewise, the data contradict models that rely on feedback from the nph. In addition, they show that the oculomotor neural integrator is not a single neural entity but is most likely distributed among a number of nuclei.
34

Kliewer, Mark A., Michael Hartung, and C. Shawn Green. "The Search Patterns of Abdominal Imaging Subspecialists for Abdominal Computed Tomography: Toward a Foundational Pattern for New Radiology Residents." Journal of Clinical Imaging Science 11 (January 9, 2021): 1. http://dx.doi.org/10.25259/jcis_195_2020.

Abstract:
Objectives: The routine search patterns used by subspecialty abdominal imaging experts to inspect the image volumes of abdominal/pelvic computed tomography (CT) have not been well characterized or rendered in practical or teachable terms. The goal of this study is to describe the search patterns used by experienced subspecialty imagers when reading a normal abdominal CT at a modern picture archiving and communication system workstation, and utilize this information to propose guidelines for residents as they learn to interpret CT during training. Material and Methods: Twenty-two academic subspecialists enacted their routine search pattern on a normal contrast-enhanced abdominal/pelvic CT study under standardized display parameters. Readers were told that the scan was normal and then asked to verbalize where their gaze centered and moved through the axial, coronal, and sagittal image stacks, demonstrating eye position with a cursor as needed. A peer coded the reported eye gaze movements and scrilling behavior. Spearman correlation coefficients were calculated between years of professional experience and the numbers of passes through the lung bases, liver, kidneys, and bowel. Results: All readers followed an initial organ-by-organ approach. Larger organs were examined by drilling, while smaller organs were examined by oscillation or scanning. Search elements were classified as drilling, scanning, oscillation, and scrilling (scan drilling); these categories were parsed as necessary. The greatest variability was found in the examination of the body wall and bowel/mesentery. Two modes of scrilling were described, and these were classified as roaming and zigzagging. The years of experience of the readers did not correlate with the number of passes made through the lung bases, liver, kidneys, or bowel. Conclusion: Subspecialty abdominal radiologists negotiate through the image stacks of an abdominal CT study in broadly similar ways. Collation of the approaches suggests a foundational search pattern for new trainees.
36

Bottorff, Joan L. "Development of an Observational Instrument to Study Nurse-Patient Touch." Journal of Nursing Measurement 2, no. 1 (January 1994): 7–24. http://dx.doi.org/10.1891/1061-3749.2.1.7.

Full text
Abstract:
A detailed analysis of videotaped data of nurse-patient interactions (NPIs) was used to identify the structure of NPIs and the important behaviors comprising touching interactions that could serve as a basis for the development of an observational instrument to study nurse-patient touch. The resulting coding system enabled observers to record types of nurse attending, eye gaze, proximity, nurse-patient dialogue, nurse activity, patient condition, presence of others in the room, and characteristics of nurse-patient touch. To evaluate this instrument, two samples of videotaped NPIs were selected and coded by three observers. Acceptable levels of interobserver and intraobserver agreement and observer reliability were established and maintained throughout the coding process, with the exception of two categories (i.e., nurse activity and intensity of touch). The two samples were compared to provide further support for the content validity of the coding system and to evaluate the sensitivity of the measure to the same behaviors in different samples of NPIs. It was concluded that the instrument has potential for use in describing the patterns of touch nurses use in a more comprehensive and detailed way than has been done in the past and that this instrument warrants further study in other clinical contexts.
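Instrument evaluations like the one described hinge on interobserver agreement for categorical codes. The following sketch computes percent agreement and Cohen's kappa for two observers; the category labels and code sequences are invented placeholders, not the instrument's actual categories.

```python
# Sketch: percent agreement and Cohen's kappa for two observers coding the
# same sequence of behaviors. Labels below are illustrative placeholders.
from collections import Counter

observer_a = ["gaze", "touch", "dialogue", "gaze", "touch", "activity", "gaze"]
observer_b = ["gaze", "touch", "dialogue", "touch", "touch", "activity", "gaze"]

n = len(observer_a)
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# Expected chance agreement, from each observer's marginal frequencies
freq_a = Counter(observer_a)
freq_b = Counter(observer_b)
p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(observer_a) | set(observer_b))

kappa = (agreement - p_chance) / (1 - p_chance)
print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```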
APA, Harvard, Vancouver, ISO, and other styles
37

Rvachew, Susan, Dahlia Thompson, and Elizabeth Carolan. "Description of Boys and Girls' Nonverbal and Verbal Engagement With Electronic and Paper Books." Journal of Cognitive Education and Psychology 18, no. 2 (October 1, 2019): 212–22. http://dx.doi.org/10.1891/1945-8959.18.2.212.

Full text
Abstract:
Girls often outperform boys on measures of literacy achievement. This gender gap in literacy performance has been observed to be persistent over developmental time, consistent across multiple domains of literacy, and widely spread geographically. It is sometimes suggested that boys' achievement in the literacy domain might be improved by the availability of reading materials that are designed to engage boys' attention, such as electronic books with interactive features. We addressed the question of boys' versus girls' engagement with reading materials by observing 20 small groups of boys or girls interacting with an electronic book and then a paper book. The children's engagement with the book in each case was coded using Noldus software. Engagement was operationalized in terms of eye gaze (looking at book or reading partner vs. elsewhere), handling (i.e., touching or pointing at book or partner with engaging as opposed to prohibitive actions), and verbal behaviors (i.e., reading, paraphrasing, or talking about the book when compared to not talking or off-task talk). Total time and percent time spent engaged with each book were examined by gender. The results revealed greater nonverbal engagement with the ebook compared to the paper book but greater verbal engagement with the paper book compared to the ebook. No differences in engagement by gender were observed, however.
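Engagement expressed as total and percent time can be tallied directly from coded intervals. A minimal sketch follows, using made-up intervals rather than the study's Noldus exports.

```python
# Sketch: total and percent time engaged, computed from coded intervals
# (start, end) in seconds. The session length and intervals are hypothetical.
session_length_s = 300  # assumed five-minute book-sharing session
engaged_intervals = [(0, 45), (60, 150), (170, 260)]  # seconds gazing at book or partner

total_engaged = sum(end - start for start, end in engaged_intervals)
percent_engaged = 100 * total_engaged / session_length_s
print(f"engaged {total_engaged}s of {session_length_s}s ({percent_engaged:.1f}%)")
```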
APA, Harvard, Vancouver, ISO, and other styles
38

Salvadori, Eliala A., Cristina Colonnesi, Heleen S. Vonk, Frans J. Oort, and Evin Aktar. "Infant Emotional Mimicry of Strangers: Associations with Parent Emotional Mimicry, Parent-Infant Mutual Attention, and Parent Dispositional Affective Empathy." International Journal of Environmental Research and Public Health 18, no. 2 (January 14, 2021): 654. http://dx.doi.org/10.3390/ijerph18020654.

Full text
Abstract:
Emotional mimicry, the tendency to automatically and spontaneously reproduce others’ facial expressions, characterizes human social interactions from infancy onwards. Yet, little is known about the factors modulating its development in the first year of life. This study investigated infant emotional mimicry and its association with parent emotional mimicry, parent-infant mutual attention, and parent dispositional affective empathy. One hundred and seventeen parent-infant dyads (51 six-month-olds, 66 twelve-month-olds) were observed during video presentation of strangers’ happy, sad, angry, and fearful faces. Infant and parent emotional mimicry (i.e., facial expressions valence-congruent to the video) and their mutual attention (i.e., simultaneous gaze at one another) were systematically coded second-by-second. Parent empathy was assessed via self-report. Path models indicated that infant mimicry of happy stimuli was positively and independently associated with parent mimicry and affective empathy, while infant mimicry of sad stimuli was related to longer parent-infant mutual attention. Findings provide new insights into infants’ and parents’ coordination of mimicry and attention during triadic contexts of interactions, endorsing the social-affiliative function of mimicry already present in infancy: emotional mimicry occurs as an automatic parent-infant shared behavior and early manifestation of empathy only when strangers’ emotional displays are positive, and thus perceived as affiliative.
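Mutual attention coded second-by-second reduces to counting the seconds in which both partners' gaze codes point at one another. The sketch below assumes hypothetical gaze streams and code labels, not the study's coding scheme.

```python
# Sketch: second-by-second mutual attention, defined here as both partners
# gazing at one another in the same second. Code streams are hypothetical.
infant_gaze = ["screen", "parent", "parent", "away",   "parent", "screen"]
parent_gaze = ["infant", "infant", "infant", "infant", "screen", "infant"]

mutual = [i == "parent" and p == "infant" for i, p in zip(infant_gaze, parent_gaze)]
print(f"mutual attention in {sum(mutual)} of {len(mutual)} seconds")
```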
APA, Harvard, Vancouver, ISO, and other styles
39

Slifer, Keith J., Valerie Pulbrook, Adrianna Amari, Natalie Vona-Messersmith, Jeffrey F. Cohn, Zara Ambadar, Melissa Beck, and Rachel Piszczor. "Social Acceptance and Facial Behavior in Children with Oral Clefts." Cleft Palate-Craniofacial Journal 43, no. 2 (March 2006): 226–36. http://dx.doi.org/10.1597/05-018.1.

Full text
Abstract:
Objective: To examine and compare social acceptance, social behavior, and facial movements of children with and without oral clefts in an experimental setting. Design: Two groups of children (with and without oral clefts) were videotaped in a structured social interaction with a peer confederate, when listening to emotional stories, and when told to pose specific facial expressions. Participants: Twenty-four children and adolescents ages 7 to 16½ years with oral clefts were group matched for gender, grade, and socioeconomic status with 25 noncleft controls. Main Outcome Measures: Specific social and facial behaviors coded from videotapes; Harter Self-Perception Profile, Social Acceptance subscale. Results: Significant between-group differences were obtained. Children in the cleft group more often displayed “Tongue Out,” “Eye Contact,” “Mimicry,” and “Initiates Conversation.” For the cleft group, “Gaze Avoidance” was significantly negatively correlated with social acceptance scores. The groups were comparable in their ability to pose and spontaneously express facial emotion. Conclusions: When comparing children with and without oral clefts in an experimental setting, with a relatively small sample size, behavior analysis identified some significant differences in patterns of social behavior but not in the ability to express facial emotion. Results suggest that many children with oral clefts may have relatively typical social development. However, for those who do have social competence deficits, systematic behavioral observation of atypical social responses may help individualize social skills interventions.
APA, Harvard, Vancouver, ISO, and other styles
40

Bhat, Anjana N., Sudha M. Srinivasan, Colleen Woxholdt, and Aaron Shield. "Differences in praxis performance and receptive language during fingerspelling between deaf children with and without autism spectrum disorder." Autism 22, no. 3 (December 20, 2016): 271–82. http://dx.doi.org/10.1177/1362361316672179.

Full text
Abstract:
Children with autism spectrum disorder present with a variety of social communication deficits such as atypicalities in social gaze and verbal and non-verbal communication delays as well as perceptuo-motor deficits like motor incoordination and dyspraxia. In this study, we had the unique opportunity to study praxis performance in deaf children with and without autism spectrum disorder in a fingerspelling context using American Sign Language. A total of 11 deaf children with autism spectrum disorder and 11 typically developing deaf children aged between 5 and 14 years completed a fingerspelling task. Children were asked to fingerspell 15 different words shown on an iPad. We coded various praxis errors and fingerspelling time. The deaf children with autism spectrum disorder had greater errors in pace, sequence precision, accuracy, and body part use and also took longer to fingerspell each word. Additionally, the deaf children with autism spectrum disorder had poor receptive language skills and this strongly correlated with their praxis performance and autism severity. These findings extend the evidence for dyspraxia in hearing children with autism spectrum disorder to deaf children with autism spectrum disorder. Poor sign language production in children with autism spectrum disorder may contribute to their poor gestural learning/comprehension and vice versa. Our findings have therapeutic implications for children with autism spectrum disorder when teaching sign language.
APA, Harvard, Vancouver, ISO, and other styles
41

Behrmann, Marlene, Thea Ghiselli-Crippa, John A. Sweeney, Ilaria Di Matteo, and Robert Kass. "Mechanisms Underlying Spatial Representation Revealed through Studies of Hemispatial Neglect." Journal of Cognitive Neuroscience 14, no. 2 (February 1, 2002): 272–90. http://dx.doi.org/10.1162/089892902317236894.

Full text
Abstract:
The representations that mediate the coding of spatial position were examined by comparing the behavior of patients with left hemispatial neglect with that of nonneurological control subjects. To determine the spatial coordinate system(s) used to define “left” and “right,” eye movements were measured for targets that appeared at 5°, 10°, and 15° to the relative left or right defined with respect to the midline of the eyes, head, or midsagittal plane of the trunk. In the baseline condition, in which the various egocentric midlines were all aligned with the environmental midline, patients were disproportionately slower at initiating saccades to left than right targets, relative to the controls. When either the trunk or the head was rotated and the midline aligned with the most peripheral position while the eyes remained aligned with the midline of the environment, the results did not differ from the baseline condition. However, when the eyes were rotated and the midline aligned with the peripheral position, saccadic reaction time (SRT) differed significantly from the baseline, especially when the eyes were rotated to the right. These findings suggest that target position is coded relative to the current position of gaze (oculocentrically) and that this eye-centered coding is modulated by orbital position (eye-in-head signal). The findings dovetail well with results from existing neurophysiological studies and shed further light on the spatial representations mediated by the human parietal cortex.
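The distinction between eye-, head-, and trunk-centered coding probed here comes down to which egocentric midline the target position is expressed against. A minimal sketch of that subtraction, with hypothetical angles (none of these values come from the study):

```python
# Sketch: expressing one target location in different egocentric frames,
# illustrating the distinction probed in the experiment. Angles are degrees
# along the horizontal axis and are hypothetical values.
target_in_environment = 10.0   # target 10 deg right of the environmental midline
eye_direction = 15.0           # eyes rotated 15 deg right of the environment
head_direction = 0.0           # head aligned with the environment
trunk_direction = 0.0          # trunk aligned with the environment

target_eye_centered = target_in_environment - eye_direction      # -5 deg (left of gaze)
target_head_centered = target_in_environment - head_direction    # +10 deg
target_trunk_centered = target_in_environment - trunk_direction  # +10 deg

print(target_eye_centered, target_head_centered, target_trunk_centered)
```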
APA, Harvard, Vancouver, ISO, and other styles
42

Boussaoud, Driss, Christophe Jouffrais, and Frank Bremmer. "Eye Position Effects on the Neuronal Activity of Dorsal Premotor Cortex in the Macaque Monkey." Journal of Neurophysiology 80, no. 3 (September 1, 1998): 1132–50. http://dx.doi.org/10.1152/jn.1998.80.3.1132.

Full text
Abstract:
Visual inputs to the brain are mapped in a retinocentric reference frame, but the motor system plans movements in a body-centered frame. This basic observation implies that the brain must transform target coordinates from one reference frame to another. Physiological studies revealed that the posterior parietal cortex may contribute a large part of such a transformation, but the question remains as to whether the premotor areas receive visual information, from the parietal cortex, readily coded in body-centered coordinates. To answer this question, we studied dorsal premotor cortex (PMd) neurons in two monkeys while they performed a conditional visuomotor task and maintained fixation at different gaze angles. Visual stimuli were presented on a video monitor, and the monkeys made limb movements on a panel of three touch pads located at the bottom of the monitor. A trial began when the monkey put its hand on the central pad. Then, later in the trial, a colored cue instructed a limb movement to the left touch pad if red or to the right one if green. The cues lasted for a variable delay, the instructed delay period, and their offset served as the go signal. The fixation spot was presented at the center of the screen or at one of four peripheral locations. Because the monkey's head was restrained, peripheral fixations caused a deviation of the eyes within the orbit, but for each fixation angle, the instructional cue was presented at nine locations with constant retinocentric coordinates. After the presentation of the instructional cue, 133 PMd cells displayed a phasic discharge (signal-related activity), 157 were tonically active during the instructed delay period (set-related or preparatory activity), and 104 were active after the go signal in relation to movement (movement-related activity). A large proportion of cells showed variations of the discharge rate in relation to limb movement direction, but only modest proportions were sensitive to the cue's location (signal, 43%; set, 34%; movement, 29%). More importantly, the activity of most neurons (signal, 74%; set, 79%; movement, 79%) varied significantly (analysis of variance, P < 0.05) with orbital eye position. A regression analysis showed that the neuronal activity varied linearly with eye position along the horizontal and vertical axes and could be approximated by a two-dimensional regression plane. These data provide evidence that eye position signals modulate the neuronal activity beyond sensory areas, including those involved in visually guided reaching limb movements. Further, they show that neuronal activity related to movement preparation and execution combines at least two directional parameters: arm movement direction and gaze direction in space. It is suggested that a substantial population of PMd cells codes limb movement direction in a head-centered reference frame.
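The regression analysis described, in which activity varies linearly with horizontal and vertical eye position, amounts to fitting a plane by least squares. A minimal sketch with simulated (not recorded) firing rates:

```python
# Sketch: fitting a two-dimensional regression plane of the kind described,
# firing_rate ~ a*eye_x + b*eye_y + c, with numpy least squares.
# The firing rates below are simulated placeholders, not recorded data.
import numpy as np

eye_x = np.array([-15.0, 0.0, 15.0, -15.0, 0.0, 15.0])      # horizontal eye position (deg)
eye_y = np.array([10.0, 10.0, 10.0, -10.0, -10.0, -10.0])   # vertical eye position (deg)
rate = np.array([12.0, 18.0, 25.0, 8.0, 14.0, 20.0])        # spikes/s (hypothetical)

design = np.column_stack([eye_x, eye_y, np.ones_like(eye_x)])
(a, b, c), *_ = np.linalg.lstsq(design, rate, rcond=None)
print(f"rate ~ {a:.2f}*eye_x + {b:.2f}*eye_y + {c:.2f}")
```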
APA, Harvard, Vancouver, ISO, and other styles
43

De Froy, Adrienne M., Megan E. Sims, Benjamin M. Sloan, Sebastian A. Gajardo, and Pamela Rosenthal Rollins. "Differential responses to child communicative behavior of parents of toddlers with ASD." Autism & Developmental Language Impairments 6 (January 2021): 239694152098489. http://dx.doi.org/10.1177/2396941520984892.

Full text
Abstract:
Background and aims: The quality of parent verbal input—diverse vocabulary that is well-matched to the child's developmental level within interactions that are responsive to their interests—has been found to positively impact child language skills. For typically developing (TD) children, there is evidence that more advanced linguistic and social development differentially elicits higher quality parent input, suggesting a bidirectional relationship between parent and child. The purpose of this study was to evaluate if toddlers with ASD also differentially elicit parental verbal input by (1) analyzing the quality of parent input to the communicative behavior of their toddlers with ASD, (2) examining if parents respond differentially to more advanced toddler communicative behavior, as measured by the coordination of multiple communicative behaviors, and (3) exploring the relationship between parental responsiveness to child communicative behaviors and change in child communication and social skills. Methods: Participants were 77 toddlers with ASD aged 18–39 months and a parent who participated in a larger RCT. Ten-minute parent–toddler interactions were recorded prior to a 12-week intervention. Parent response to child communicative behaviors was coded following each child communicative behavior as no acknowledgment, responsive, directive, or nonverbal acknowledgment. Parent number of different words and difference between parent and child MLU in words were calculated separately for responsive and directive parent utterances. Child growth in language and social skills was measured using the Vineland II Communication and Socialization domain scores, respectively. Results: (1) Parents were largely responsive to their toddler's communication. When being responsive (as opposed to directive), parents used a greater number of different words within utterances that were well-matched to child language; (2) when toddlers coordinated communicative behaviors (versus producing an isolated communicative behavior), parents were more likely to respond and their replies were more likely to be responsive; and (3) parent responsiveness to child coordinated communication was significantly correlated with change in Vineland II Socialization but not Communication. A unique role of gaze coordinated child communication in eliciting responsive parental behaviors and improving growth in child social skills emerged. Conclusions: Our results support a bidirectional process between responsive parent verbal input and the social development of toddlers with ASD, with less sophisticated child communicative behaviors eliciting lower quality parent input. Implications: Our findings highlight the critical role of early parent-mediated intervention for children with ASD generally, and of enhancing eye gaze through parent responsivity more specifically.
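The parent–child MLU difference used here as an index of how well parent talk is matched to child language is simple to compute from transcripts. A sketch with invented utterances (not transcript data):

```python
# Sketch: mean length of utterance in words (MLUw) and the parent-child MLUw
# difference. The utterances below are invented examples, not transcript data.
def mlu_words(utterances):
    return sum(len(u.split()) for u in utterances) / len(utterances)

child_utterances = ["more juice", "doggie", "want ball"]
parent_utterances = ["you want more juice", "yes that is the doggie", "here is the ball"]

difference = mlu_words(parent_utterances) - mlu_words(child_utterances)
print(f"parent-child MLUw difference = {difference:.2f} words")
```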
APA, Harvard, Vancouver, ISO, and other styles
44

Kobayashi, Shunsuke, Johan Lauwereyns, Masashi Koizumi, Masamichi Sakagami, and Okihide Hikosaka. "Influence of Reward Expectation on Visuospatial Processing in Macaque Lateral Prefrontal Cortex." Journal of Neurophysiology 87, no. 3 (March 1, 2002): 1488–98. http://dx.doi.org/10.1152/jn.00472.2001.

Full text
Abstract:
The lateral prefrontal cortex (LPFC) has been implicated in visuospatial processing, especially when it is required to hold spatial information during a delay period. It has also been reported that the LPFC receives information about expected reward outcome. However, the interaction between visuospatial processing and reward processing is still unclear because the two types of processing could not be dissociated in conventional delayed response tasks. To examine this, we used a memory-guided saccade task with an asymmetric reward schedule and recorded 228 LPFC neurons. The position of the target cue indicated the spatial location for the following saccade and the color of the target cue indicated the reward outcome for a correct saccade. Activity of LPFC was classified into three main types: S-type activity carried only spatial signals, R-type activity carried only reward signals, and SR-type activity carried both. Therefore only SR-type cells were potentially involved in both visuospatial processing and reward processing. SR-type activity was enhanced (SR+) or depressed (SR−) by the reward expectation. The spatial discriminability as expressed by the transmitted information was improved by reward expectation in SR+ type. In contrast, when reward information was coded by an increase of activity in the reward-absent condition (SR− type), it did not improve the spatial representation. This activity appeared to be involved in gaze fixation. These results extend previous findings suggesting that the LPFC exerts dual influences based on predicted reward outcome: improvement of memory-guided saccades (when reward is expected) and suppression of inappropriate behavior (when reward is not expected).
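Spatial discriminability expressed as transmitted information corresponds to the mutual information between cue location and the neural response. The sketch below uses a plug-in estimate from a hypothetical joint count table; none of the numbers are recorded data.

```python
# Sketch: plug-in estimate of transmitted (mutual) information between target
# location and a neuron's binned spike count. The joint counts are hypothetical.
import numpy as np

# rows: target locations; columns: spike-count bins (e.g. low/medium/high)
joint_counts = np.array([[30, 8, 2],
                         [10, 25, 5],
                         [3, 12, 25]], dtype=float)

p_joint = joint_counts / joint_counts.sum()
p_loc = p_joint.sum(axis=1, keepdims=True)
p_spk = p_joint.sum(axis=0, keepdims=True)

nonzero = p_joint > 0
mi_bits = np.sum(p_joint[nonzero] * np.log2(p_joint[nonzero] / (p_loc @ p_spk)[nonzero]))
print(f"transmitted information ~ {mi_bits:.2f} bits")
```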
APA, Harvard, Vancouver, ISO, and other styles
45

Beebe, Beatrice, and Frank Lachmann. "Maternal Self-Critical and Dependent Personality Styles and Mother-Infant Communication." Journal of the American Psychoanalytic Association 65, no. 3 (May 23, 2017): 491–508. http://dx.doi.org/10.1177/0003065117709004.

Full text
Abstract:
This study investigated mother-infant communication in relation to Blatt's measures of adult personality organization, namely, interpersonal relatedness and self-definition, defining the higher ends of these two measures as dependency and self-criticism, respectively. A nonclinical sample of 126 mother-infant dyads provided the data. An evaluation of maternal self-criticism and dependency was made six weeks postpartum; four months postpartum, mother-infant self- and interactive contingencies during face-to-face play were studied and analyzed in conjunction with the earlier evaluation. Self- and interactive contingencies were defined by the predictability within, and between, the behaviors of each partner. This approach assesses the process of relating from moment to moment within a dyad. Self-contingency measures the degree of stability/variability of one person's ongoing rhythms of behavior; interactive contingency measures the likelihood that one person's behavior is influenced by the behavior of the partner. Infant and mother facial affect, gaze, and touch, and infant vocal affect, were coded second by second from split-screen videotape. Maternal self-criticism and dependency had strikingly different effects on mother-infant communication. Self-critical mothers showed lowered attention and emotion coordination, staying more “separate” from infants in these realms, compromising infant interactive efficacy. This finding is consistent with Blatt and colleagues' descriptions of self-critical individuals as preoccupied with self-definition, compromising relatedness. Dependent mothers and their infants showed reciprocal emotional vigilance, consistent with Blatt and colleagues' description of dependent individuals as “empty” and “needy” of emotional supplies from their partner. The study documents that the influence of the mother's personality organization operates through both infant and maternal contributions, a co-created process rather than a direct unilateral transmission from mother to infant.
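Self- and interactive contingency as defined here are forms of lagged predictability within and between partners' behavior streams. The study used formal time-series models; the sketch below is a much simpler lag-1 correlation stand-in with invented affect scores, offered only to make the distinction concrete.

```python
# Sketch: contingency as lagged predictability in second-by-second behavior
# scales. Self-contingency: a partner's stream predicted from its own past;
# interactive contingency: predicted from the other partner's past.
# The affect scores below are hypothetical, not coded study data.
import numpy as np

mother_affect = np.array([2, 3, 3, 4, 3, 2, 2, 3, 4, 4], dtype=float)
infant_affect = np.array([1, 2, 3, 3, 3, 2, 1, 2, 3, 4], dtype=float)

def lag1_correlation(predictor, outcome):
    # correlate outcome at time t with predictor at time t-1
    return np.corrcoef(predictor[:-1], outcome[1:])[0, 1]

self_contingency = lag1_correlation(mother_affect, mother_affect)
interactive_contingency = lag1_correlation(infant_affect, mother_affect)
print(f"mother self-contingency (lag-1 r) = {self_contingency:.2f}")
print(f"infant->mother interactive contingency (lag-1 r) = {interactive_contingency:.2f}")
```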
APA, Harvard, Vancouver, ISO, and other styles
46

Marino, Robert A., Thomas P. Trappenberg, Michael Dorris, and Douglas P. Munoz. "Spatial Interactions in the Superior Colliculus Predict Saccade Behavior in a Neural Field Model." Journal of Cognitive Neuroscience 24, no. 2 (February 2012): 315–36. http://dx.doi.org/10.1162/jocn_a_00139.

Full text
Abstract:
During natural vision, eye movements are dynamically controlled by the combinations of goal-related top-down (TD) and stimulus-related bottom-up (BU) neural signals that map onto objects or locations of interest in the visual world. In primates, both BU and TD signals converge in many areas of the brain, including the intermediate layers of the superior colliculus (SCi), a midbrain structure that contains a retinotopically coded map for saccades. How TD and BU signals combine or interact within the SCi map to influence saccades remains poorly understood and actively debated. It has been proposed that winner-take-all competition between these signals occurs dynamically within this map to determine the next location for gaze. Here, we examine how TD and BU signals interact spatially within an artificial two-dimensional dynamic winner-take-all neural field model of the SCi to influence saccadic RT (SRT). We measured point images (spatially organized population activity on the SC map) physiologically to inform the TD and BU model parameters. In this model, TD and BU signals interacted nonlinearly within the SCi map to influence SRT via changes to the (1) spatial size or extent of individual signals, (2) peak magnitude of individual signals, (3) total number of competing signals, and (4) the total spatial separation between signals in the visual field. This model reproduced previous behavioral studies of TD and BU influences on SRT and accounted for multiple inconsistencies between them. This is achieved by demonstrating how, under different experimental conditions, the spatial interactions of TD and BU signals can lead to either increases or decreases in SRT. Our results suggest that dynamic winner-take-all modeling with local excitation and distal inhibition in two dimensions accurately reflects both the physiological activity within the SCi map and the behavioral changes in SRT that result from BU and TD manipulations.
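A toy one-dimensional field with local excitation and distal inhibition illustrates the kind of competition the authors model. Their actual model is two-dimensional and constrained by measured SC point images; every parameter below is made up, and the output nonlinearity is chosen only to keep the toy simulation stable.

```python
# Toy 1-D dynamic neural field with local excitation and distal inhibition,
# in the spirit of the model described above. All parameters are invented.
import numpy as np

n = 200                          # nodes across the field
x = np.arange(n)
dt, tau = 1.0, 20.0              # time step and time constant (arbitrary units)

def kernel(d, a_exc=1.0, s_exc=5.0, a_inh=0.3, s_inh=60.0):
    # short-range excitation, long-range inhibition (difference of Gaussians)
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

W = kernel(np.abs(x[:, None] - x[None, :]))

stim = np.zeros(n)
stim[60], stim[140] = 1.0, 0.8   # two competing inputs of unequal strength

u = np.zeros(n)
for _ in range(400):
    f = np.tanh(np.maximum(u, 0.0))          # saturating output nonlinearity
    u += dt / tau * (-u + W @ f + stim)

print("most active location:", int(np.argmax(u)))  # expected near the stronger input (60)
```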
APA, Harvard, Vancouver, ISO, and other styles
47

Moritaka, Kiyoshi, and Tomonori Kawano. "Use of Colored Reflectors for Negation or Highlighting of Scanned Color Information on Film-Based CIELAB-Coded Optical Logic Gate Models." Journal of Advanced Computational Intelligence and Intelligent Informatics 17, no. 6 (November 20, 2013): 799–804. http://dx.doi.org/10.20965/jaciii.2013.p0799.

Full text
Abstract:
In the last two decades, a number of researchers have been engaged in the study of natural computing systems that employ physical, chemical, and biological properties as direct media for manifesting computations. Among such attempts, studies focusing on the use of lights as key computation components in particular have attracted the attention of researchers and engineers, since these studies are potentially applicable to signal processing through optical interconnections between electronic devices. Our research team has recently been engaged in the study of a novel color-based natural computing model. Our recent works included using CIELAB-coded colors on printed paper to compute Boolean conjunctions (AND operations). In this study, we performed Boolean operations based on CIELAB-coded colors by placing color-printed films over aluminum-coated reflectors with and/or without color. The results of the operations were gathered by testing the color codes printed on the films for negation or highlighting. This type of CIELAB-based color computing has a wide range of potential applications, such as a method for security or access control to secured systems. Such applications could match paired color keys on which the arrays of color codes could be printed and optically computed.
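Any film/reflector matching scheme of this kind ultimately compares CIELAB coordinates. The sketch below uses a CIE76 colour difference with an assumed threshold to emulate a binary match; the L*a*b* values and the threshold are hypothetical, not taken from the paper.

```python
# Sketch: comparing CIELAB-coded values with a CIE76 colour difference and a
# threshold, as a stand-in for the film/reflector match described above.
# The L*a*b* values and threshold are hypothetical.
import math

def delta_e76(lab1, lab2):
    return math.dist(lab1, lab2)  # Euclidean distance in L*a*b* space

printed_key = (52.0, 38.0, -20.0)     # colour code printed on the film
scanned_patch = (50.5, 36.0, -18.5)   # colour measured off the reflector stack

MATCH_THRESHOLD = 5.0                 # assumed tolerance in Delta-E units
difference = delta_e76(printed_key, scanned_patch)
matches = difference < MATCH_THRESHOLD
print("codes match (logic-1):" if matches else "codes differ (logic-0):", round(difference, 2))
```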
APA, Harvard, Vancouver, ISO, and other styles
48

Duan, Jingyi, and Nikhilesh Dholakia. "The reshaping of Chinese consumer values in the social media era." Qualitative Market Research: An International Journal 18, no. 4 (September 14, 2015): 409–26. http://dx.doi.org/10.1108/qmr-07-2014-0058.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate how, in China, postings on social media site Weibo reflect as well as accelerate the reshaping of traditional values. As Chinese social media extend their reach outside China, the displays of visible desire, hedonism and materialism could influence global consumption ethos. Design/methodology/approach – Using interpretive content analysis, over 250 Weibo postings of 8 selected Weibo users, from the network of one of the authors, were identified, coded and interpreted. The users were selected based on their frequency, variety and expressiveness of postings. Findings – Weibo is playing a critical role in transforming Chinese consumer values. Via Weibo, personal consumption experiences are available for public gaze. Consequently, desire for powerfully signified objects and experiences is more visible; “enjoy now” is turning out to be an appreciated life attitude, and materialism and hedonism are growing irresistibly. As a result, the traditional Chinese consumer values – suppressing desire, delaying gratification and thriftiness – are losing ground in Chinese society. Also, as Weibo makes the influence of the elite as well as electronic word-of-mouth very powerful, the values of the elite and grassroots groups are actually converging instead of being separated by substantial chasms that have existed historically. Practical implications – Sina Weibo had a US initial public offering (IPO) of its stock in April, 2014, and many other China-based Internet firms were getting set for US IPOs. This paper provides unique insights for Chinese social media companies’ potential global impact. Future social media contexts would be shaped by collision as well as convergence of Asia-centric and USA-centric platforms. This paper lays the groundwork for studying such interactions. Originality/value – In-depth interpretations of Weibo postings contribute to our understanding of how social media impact Chinese society now and would potentially affect global societies later. This is a pioneering study on the massive influences of social media on the macro-level consumer behavior.
APA, Harvard, Vancouver, ISO, and other styles
49

Xiao, Hailin, Ju Ni, Wu Xie, and Shan Ouyang. "A construction of quantum turbo product codes based on CSS-type quantum convolutional codes." International Journal of Quantum Information 15, no. 01 (February 2017): 1750003. http://dx.doi.org/10.1142/s0219749917500034.

Full text
Abstract:
As in classical coding theory, turbo product codes (TPCs) built from serially concatenated block codes can approach the Shannon capacity limit and have low decoding complexity. However, special requirements in the quantum setting severely limit the structure of quantum turbo product codes (QTPCs). To design a good structure for QTPCs, we present a new construction of QTPCs based on the interleaved serial concatenation of CSS-type quantum convolutional codes (QCCs). First, CSS-type QCCs are proposed by exploiting the theory of CSS-type quantum stabilizer codes and QCCs, and the description and analysis of the encoder circuit are greatly simplified in terms of Hadamard and C-NOT gates. Second, the interleaved coded matrix of QTPCs is derived from the definition of the quantum permutation SWAP gate. Finally, we prove the relation between the minimum Hamming distance of QTPCs and that of the associated classical TPCs, and describe the state diagram of the QTPC encoder and decoder, which has a highly regular structure and a simple design.
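For intuition about the product-code structure underlying TPCs (and, by extension, QTPCs), the classical construction arranges message bits in a matrix and protects every row and column. The sketch below is that classical analogue with single parity checks; it does not reproduce the paper's CSS-type quantum construction.

```python
# Sketch: the classical product-code idea behind TPCs -- arrange message bits
# in a matrix and protect each row and each column with a parity check.
# A classical analogue for intuition only, not the quantum construction.
import numpy as np

message = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [1, 0, 0]])

row_parity = message.sum(axis=1) % 2          # one parity bit per row
col_parity = message.sum(axis=0) % 2          # one parity bit per column

codeword = np.zeros((4, 4), dtype=int)
codeword[:3, :3] = message
codeword[:3, 3] = row_parity
codeword[3, :3] = col_parity
codeword[3, 3] = message.sum() % 2            # overall parity (check on checks)
print(codeword)
```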
APA, Harvard, Vancouver, ISO, and other styles
50

Cichoń, Sławomir, and Marek Gorgoń. "FPGA Implementation of H.264 CAVLC Decoder Using High-Level Synthesis." Image Processing & Communications 22, no. 2 (June 1, 2017): 21–26. http://dx.doi.org/10.1515/ipc-2017-0009.

Full text
Abstract:
Context Adaptive Variable Length Coding (CAVLC) is a method designed for coding residual pixel data after transform and quantization, in which different variable-length codes are chosen based on recently coded coefficients. The coded bitstream can be stored or transmitted. This method is optional in the widely adopted H.264 video coding standard. The entire algorithm is complex and difficult to implement efficiently in a Field-Programmable Gate Array (FPGA) due to data dependencies. As the complexity of a Register Transfer Logic (RTL) implementation rises, it increases the duration and cost of development. Therefore, the use of High-Level Synthesis (HLS) may be beneficial in these types of projects. In this paper, the first implementation known to the authors of CAVLC and Exp-Golomb decoders for an H.264 intra decoder in the Impulse C language is presented and compared with other implementations. The proposed solution is able to decode more than 720p@40fps with the FPGA module clocked at 166 MHz.
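The Exp-Golomb part of the decoding pipeline mentioned above is fully specified by H.264: count leading zeros to get a prefix length k, read k suffix bits, and the decoded value is 2^k - 1 + suffix. A minimal sketch follows; the example bitstring and function name are mine, not from the paper.

```python
# Sketch: decoding unsigned Exp-Golomb codes (ue(v)) as used in H.264.
# Leading zeros give the prefix length k; codeNum = 2**k - 1 + k suffix bits.
# The example bitstring is made up for illustration.
def decode_ue(bits, pos=0):
    k = 0
    while bits[pos] == "0":        # count leading zeros
        k += 1
        pos += 1
    pos += 1                       # skip the terminating '1'
    suffix = int(bits[pos:pos + k], 2) if k else 0
    return (1 << k) - 1 + suffix, pos + k

bitstring = "00111" + "1" + "010"  # codes for 6, 0, 1 concatenated
pos = 0
while pos < len(bitstring):
    value, pos = decode_ue(bitstring, pos)
    print(value)
```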
APA, Harvard, Vancouver, ISO, and other styles