Journal articles on the topic 'Multiple cue integration'

Consult the top 50 journal articles for your research on the topic 'Multiple cue integration.'

1

Moreno-Noguer, Francesc, Alberto Sanfeliu, and Dimitris Samaras. "Dependent Multiple Cue Integration for Robust Tracking." IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 4 (April 2008): 670–85. http://dx.doi.org/10.1109/tpami.2007.70727.

2

Guo, Li-Jun, Ting-Ting Cheng, Bo Xiao, Rong Zhang, and Jie-Yu Zhao. "Video human segmentation based on multiple-cue integration." Signal Processing: Image Communication 30 (January 2015): 166–77. http://dx.doi.org/10.1016/j.image.2014.10.001.

3

Undorf, Monika, and Arndt Bröder. "Cue integration in metamemory judgements is strategic." Quarterly Journal of Experimental Psychology 73, no. 4 (October 24, 2019): 629–42. http://dx.doi.org/10.1177/1747021819882308.

Abstract:
People base judgements about their own memory processes on probabilistic cues such as the characteristics of study materials and study conditions. While research has largely focused on how single cues affect metamemory judgements, a recent study by Undorf, Söllner, and Bröder found that multiple cues affected people’s predictions of their future memory performance (judgements of learning, JOLs). The present research tested whether this finding was indeed due to strategic integration of multiple cues in JOLs or, alternatively, resulted from people’s reliance on a single unified feeling of ease. In Experiments 1 and 2, we simultaneously varied concreteness and emotionality of word pairs and solicited (a) pre-study JOLs that could be based only on the manipulated cues and (b) immediate JOLs that could be based both on the manipulated cues and on a feeling of ease. The results revealed similar amounts of cue integration in pre-study JOLs and immediate JOLs, regardless of whether cues varied in two easily distinguishable levels (Experiment 1) or on a continuum (Experiment 2). This suggested that people strategically integrated multiple cues in their immediate JOLs. Experiment 3 provided further evidence for this conclusion by showing that false explicit information about cue values affected immediate JOLs over and above actual cue values. Hence, we conclude that cue integration in JOLs involves strategic processes.
4

Legge, Eric L. G., Christopher R. Madan, Marcia L. Spetch, and Elliot A. Ludvig. "Multiple cue use and integration in pigeons (Columba livia)." Animal Cognition 19, no. 3 (February 23, 2016): 581–91. http://dx.doi.org/10.1007/s10071-016-0963-8.

5

Tang, Xiangyu, and Christoph von der Malsburg. "Figure-Ground Separation by Cue Integration." Neural Computation 20, no. 6 (June 2008): 1452–72. http://dx.doi.org/10.1162/neco.2008.03-06-176.

Abstract:
This letter presents an improved cue integration approach to reliably separate coherent moving objects from their background scene in video sequences. The proposed method uses a probabilistic framework to unify bottom-up and top-down cues in a parallel, “democratic” fashion. The algorithm makes use of a modified Bayes rule where each pixel's posterior probabilities of figure or ground layer assignment are derived from likelihood models of three bottom-up cues and a prior model provided by a top-down cue. Each cue is treated as independent evidence for figure-ground separation. They compete with and complement each other dynamically by adjusting relative weights from frame to frame according to cue quality measured against the overall integration. At the same time, the likelihood or prior models of individual cues adapt toward the integrated result. These mechanisms enable the system to organize under the influence of visual scene structure without manual intervention. A novel contribution here is the incorporation of a top-down cue. It improves the system's robustness and accuracy and helps handle difficult and ambiguous situations, such as abrupt lighting changes or occlusion among multiple objects. Results on various video sequences are demonstrated and discussed. (Video demos are available at http://organic.usc.edu:8376/∼tangx/neco/index.html .)
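The "modified Bayes rule" summarized above lends itself to a compact illustration. The following Python sketch is a minimal, hypothetical rendering of per-pixel figure/ground inference from several cue likelihoods and a top-down prior; the function name, the fixed cue weights, and the toy maps are assumptions for illustration only, not the authors' implementation (which additionally adapts the cue models and weights from frame to frame).

```python
# Minimal sketch: fuse several per-pixel cue likelihoods with a top-down prior under a
# naive-Bayes independence assumption. The weights stand in for the paper's
# frame-by-frame cue-quality terms and are chosen arbitrarily here.
import numpy as np

def figure_posterior(cue_likelihoods_fig, cue_likelihoods_gnd, prior_fig, weights):
    """Per-pixel posterior P(figure | cues)."""
    log_fig = np.log(prior_fig)          # top-down prior for the figure layer
    log_gnd = np.log(1.0 - prior_fig)    # complementary prior for the ground layer
    for L_fig, L_gnd, w in zip(cue_likelihoods_fig, cue_likelihoods_gnd, weights):
        # Each cue contributes independent evidence, weighted by its current quality.
        log_fig += w * np.log(L_fig)
        log_gnd += w * np.log(L_gnd)
    return 1.0 / (1.0 + np.exp(log_gnd - log_fig))  # normalized posterior

# Toy example: three bottom-up cues on a 4x4 frame, equal weights.
rng = np.random.default_rng(0)
fig = [rng.uniform(0.1, 0.9, (4, 4)) for _ in range(3)]
gnd = [1.0 - f for f in fig]
post = figure_posterior(fig, gnd, prior_fig=np.full((4, 4), 0.5), weights=[1.0, 1.0, 1.0])
```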
6

van den Bos, Esther, Morten H. Christiansen, and Jennifer B. Misyak. "Statistical learning of probabilistic nonadjacent dependencies by multiple-cue integration." Journal of Memory and Language 67, no. 4 (November 2012): 507–20. http://dx.doi.org/10.1016/j.jml.2012.07.008.

7

Juslin, Peter, Linnea Karlsson, and Henrik Olsson. "Information integration in multiple cue judgment: A division of labor hypothesis." Cognition 106, no. 1 (January 2008): 259–98. http://dx.doi.org/10.1016/j.cognition.2007.02.003.

8

Tomou, George, Xiaogang Yan, and J. Crawford. "Transsacadic Integration of Multiple Objects and The Influence of Stable Allocentric Cue." Journal of Vision 18, no. 10 (September 1, 2018): 1290. http://dx.doi.org/10.1167/18.10.1290.

9

Leichter, Ido, Michael Lindenbaum, and Ehud Rivlin. "The Cues in "Dependent Multiple Cue Integration for Robust Tracking" Are Independent." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 3 (March 2014): 620–21. http://dx.doi.org/10.1109/tpami.2010.170.

10

Bankieris, Kaitlyn R., Vikranth Rao Bejjanki, and Richard N. Aslin. "Cue Integration for Continuous and Categorical Dimensions by Synesthetes." Multisensory Research 30, no. 3-5 (2017): 207–34. http://dx.doi.org/10.1163/22134808-00002559.

Abstract:
For synesthetes, sensory or cognitive stimuli induce the perception of an additional sensory or cognitive stimulus. Grapheme–color synesthetes, for instance, consciously and consistently experience particular colors (e.g., fluorescent pink) when perceiving letters (e.g., u). As a phenomenon involving multiple stimuli within or across modalities, researchers have posited that synesthetes may integrate sensory cues differently than non-synesthetes. However, findings to date present mixed results concerning this hypothesis, with researchers reporting enhanced, depressed, or normal sensory integration for synesthetes. In this study we quantitatively evaluated the multisensory integration process of synesthetes and non-synesthetes using Bayesian principles, rather than employing multisensory illusions, to make inferences about the sensory integration process. In two studies we investigated synesthetes’ sensory integration by comparing human behavior to that of an ideal observer. We found that synesthetes integrated cues for both continuous and categorical dimensions in a statistically optimal manner, matching the sensory integration behavior of controls. These findings suggest that synesthetes and controls utilize similar cue integration mechanisms, despite differences in how they perceive unimodal stimuli.
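The ideal-observer benchmark used in studies of this kind is the standard reliability-weighted (maximum-likelihood) combination rule. The sketch below, with invented noise levels, shows how the combined estimate and its predicted noise are computed; it is a generic illustration, not the authors' analysis code.

```python
# Minimal sketch of maximum-likelihood (reliability-weighted) cue combination.
# sigma_a and sigma_b are assumed single-cue noise levels; all values are illustrative.
import numpy as np

def optimal_combination(est_a, sigma_a, est_b, sigma_b):
    w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)   # weight grows as the other cue gets noisier
    w_b = 1.0 - w_a
    combined = w_a * est_a + w_b * est_b
    sigma_combined = np.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))
    return combined, sigma_combined

est, sigma = optimal_combination(est_a=10.0, sigma_a=2.0, est_b=12.0, sigma_b=1.0)
# The ideal observer's combined noise is never larger than that of the better single cue:
assert sigma <= min(2.0, 1.0)
```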
11

Tyler, Christopher W. "An Accelerated Cue Combination Principle Accounts for Multi-cue Depth Perception." Journal of Perceptual Imaging 3, no. 1 (January 1, 2020): 10501–1. http://dx.doi.org/10.2352/j.percept.imaging.2020.3.1.010501.

Abstract:
For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration to a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from the implausibly large number of possible scenes. An alternative approach is to propagate the implied depth structure solution for the scene through the “belief propagation” algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative. The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues, would massively overestimate the perceived depth in the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or “modified weak fusion” model for depth cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, the current models do not account for the empirical properties of perceived depth from multiple depth cues. The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the available degree of perceived depth magnitude. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
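The contrast drawn above between linear summation and an asymptotic (Minkowski-type) combination rule can be illustrated numerically. The sketch below uses an arbitrary exponent m and unit cue strengths purely to show the qualitative behaviour; it is not Tyler's fitted model, and the accelerated asymptotic rule proposed in the paper is not reproduced here.

```python
# Illustrative comparison of a linear summation rule with a Minkowski-type combination
# for n depth cues that each signal the same depth. The exponent m is an assumption.
import numpy as np

def linear_sum(cue_depths):
    return np.sum(cue_depths)

def minkowski_combination(cue_depths, m=3.0):
    return np.sum(np.asarray(cue_depths) ** m) ** (1.0 / m)

for n in (1, 2, 4, 8, 12):
    cues = [1.0] * n   # each cue alone gives a near-veridical depth of 1
    print(n, linear_sum(cues), round(minkowski_combination(cues), 2))
# Linear summation grows without bound (1, 2, 4, 8, 12), massively overestimating depth;
# the Minkowski rule saturates (1, 1.26, 1.59, 2.0, 2.29).
```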
12

Walzer, Andreas, and Peter Schausberger. "Integration of multiple cues allows threat-sensitive anti-intraguild predator responses in predatory mites." Behaviour 150, no. 2 (2013): 115–32. http://dx.doi.org/10.1163/1568539x-00003040.

Abstract:
Intraguild (IG) prey is commonly confronted with multiple IG predator species. However, the IG predation (IGP) risk for prey is not only dependent on the predator species, but also on inherent (intraspecific) characteristics of a given IG predator such as its life-stage, sex or gravidity and the associated prey needs. Thus, IG prey should have evolved the ability to integrate multiple IG predator cues, which should allow both inter- and intraspecific threat-sensitive anti-predator responses. Using a guild of plant-inhabiting predatory mites sharing spider mites as prey, we evaluated the effects of single and combined cues (eggs and/or chemical traces left by a predator female on the substrate) of the low risk IG predator Neoseiulus californicus and the high risk IG predator Amblyseius andersoni on time, distance and path shape parameters of the larval IG prey Phytoseiulus persimilis. IG prey discriminated between traces of the low and high risk IG predator, with and without additional presence of their eggs, indicating interspecific threat-sensitivity. The behavioural changes were manifest in distance moved, activity and path shape of IG prey. The cue combination of traces and eggs of the IG predators conveyed other information than each cue alone, allowing intraspecific threat-sensitive responses by IG prey apparent in changed velocities and distances moved. We argue that graded responses to single and combined IG predator cues are adaptive due to minimization of acceptance errors in IG prey decision making.
13

Menzel, R., K. Geiger, L. Chittka, J. Joerges, J. Kunze, and U. Müller. "The knowledge base of bee navigation." Journal of Experimental Biology 199, no. 1 (January 1, 1996): 141–46. http://dx.doi.org/10.1242/jeb.199.1.141.

Abstract:
Navigation in honeybees is discussed against the background of the types of memories employed in the navigational task. Two questions are addressed. Do bees have goal-specific expectations, and when are novel routes travelled? Expectations are deduced from (1) context stimuli as determinants for local cue memories, (2) landmark-dependent path integration, (3) sequential learning of landmarks, and (4) motivation- and context-dependent memory retrieval. Novel routes are travelled under two conditions: (1) goal-cue-based piloting and (2) integration of simultaneously activated vector memories. Our data do not support the conclusion that memory integration in bees is organised by a cognitive map. The assumption of purely separate memories that are only retrieved according to the chain of events during navigational performance also appears to be inadequate. We favour the view that multiple memories are integrated using external and internal sources of information. Such configural memories lead to both specific expectations and novel routes.
14

Farmer, Thomas A., Meredith Brown, and Michael K. Tanenhaus. "Prediction, explanation, and the role of generative models in language processing." Behavioral and Brain Sciences 36, no. 3 (May 10, 2013): 211–12. http://dx.doi.org/10.1017/s0140525x12002312.

Abstract:
We propose, following Clark, that generative models also play a central role in the perception and interpretation of linguistic signals. The data explanation approach provides a rationale for the role of prediction in language processing and unifies a number of phenomena, including multiple-cue integration, adaptation effects, and cortical responses to violations of linguistic expectations.
15

Vinberg, Joakim, and Kalanit Grill-Spector. "Representation of Shapes, Edges, and Surfaces Across Multiple Cues in the Human Visual Cortex." Journal of Neurophysiology 99, no. 3 (March 2008): 1380–93. http://dx.doi.org/10.1152/jn.01223.2007.

Abstract:
The lateral occipital complex (LOC) responds preferentially to objects compared with random stimuli or textures independent of the visual cue. However, it is unknown whether the LOC (or other cortical regions) are involved in the processing of edges or global surfaces without shape information. Here, we examined processing of 1) global shape, 2) disconnected edges without a global shape, and 3) global surfaces without edges versus random stimuli across motion and stereo cues. The LOC responded more strongly to global shapes than to edges, surfaces, or random stimuli, for both motion and stereo cues. However, its responses to local edges or global surfaces were not different from random stimuli. This suggests that the LOC processes shapes, not edges or surfaces. LOC also responded more strongly to objects than to holes with the same shape, suggesting sensitivity to border ownership. V7 responded more strongly to edges than to surfaces or random stimuli for both motion and stereo cues, whereas V3a and V4 preferred motion edges. Finally, a region in the caudal intraparietal sulcus (cIPS) responded more strongly to both stereo versus motion and to stereo surfaces versus random stereo (but not to motion surfaces vs. random motion). Thus we found evidence for cue-specific responses to surfaces in the cIPS, both cue-specific and cue-independent responses to edges in intermediate visual areas, and shape-selective responses across multiple cues in the LOC. Overall, these data suggest that integration of visual information across multiple cues is mainly achieved at the level of shape and underscore LOC's role in shape computations.
16

Pazuchanics, Skye Lee, and Douglas J. Gillan. "Displaying Depth in Computer Systems: Lessons from Two-Dimensional Works of Art." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1583–87. http://dx.doi.org/10.1177/154193120504901718.

Abstract:
Virtual depth displays depend on static, monocular cues. Models of integrating monocular cues may be continuous (additive) or discontinuous. Previous research using simple displays and a small number of cues supported continuous cue integration. The present research is designed to expand the understanding of how the visual system integrates information from multiple pictorial cues by investigating combinations of one to ten pictorial cues in visually-rich, two-dimensional displays (paintings and photographs). Participants estimated depth in target paintings and photographs relative to a standard two-dimensional display. Certain results suggest that the visual system integrates cues in a largely additive way, but after a number of cues are present there may be an additional boost in perceived depth, resulting in a best-fitting discontinuous model of cue combination. However, this discontinuous effect may be due to design decisions made by the painters rather than exclusively to the perceptual processes of the viewers. Analyses of these design decisions provide lessons for the design of two-dimensional displays.
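The model comparison described above (continuous additive versus discontinuous combination) can be made concrete with a small fitting exercise. The ratings, breakpoint search, and step term below are hypothetical illustrations of the two model classes, not the study's data or analysis.

```python
# Minimal sketch: compare a continuous (additive) model of perceived depth against a
# discontinuous model that adds a step once the number of cues exceeds a breakpoint k.
import numpy as np

n_cues = np.arange(1, 11)
rating = np.array([1.0, 1.8, 2.7, 3.5, 4.4, 5.2, 6.0, 7.6, 8.5, 9.3])  # hypothetical ratings

# Continuous model: rating = a * n + b
a, b = np.polyfit(n_cues, rating, 1)
sse_linear = np.sum((rating - (a * n_cues + b)) ** 2)

# Discontinuous model: same linear trend plus a step of size c once n > k
def sse_step(k):
    X = np.column_stack([n_cues, (n_cues > k).astype(float), np.ones_like(n_cues, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
    return np.sum((rating - X @ coef) ** 2)

best_k = min(range(2, 9), key=sse_step)
print(sse_linear, sse_step(best_k), best_k)  # lower SSE for the step model favours discontinuity
```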
17

Yow, W. Quin, Jia Wen Lee, and Xiaoqian Li. "Age-Related Decline in Pragmatic Reasoning of Older Adults." Innovation in Aging 5, Supplement_1 (December 1, 2021): 698–99. http://dx.doi.org/10.1093/geroni/igab046.2619.

Abstract:
As speech is often ambiguous, pragmatic reasoning—the process of integrating multiple sources of information including semantics, ostensive cues and contextual information (Bohn & Frank, 2019)—is essential to understanding a speaker’s intentions. Despite current literature suggesting that certain social cognitive processes such as gaze-processing (Slessor et al., 2014) appear to be impaired in late adulthood, it is not well understood whether pragmatic reasoning declines with age. Here, we examined young adults’ (aged 19-25; n=41) and older adults’ (aged 60-79; n=41) ability to engage in pragmatic reasoning in a cue integration task. In Experiment 1, participants had to integrate contextual (participants and speaker knew there were two novel objects but the latter could only see one), semantic (“There’s the [novel-label]” or “Where’s the [novel-label]”), and gaze (speaker looked at the mutually-visible object) cues to identify the referent (Nurmsoo & Bloom, 2008). In Experiment 2, participants received the contextual and semantic cues but not the gaze cue. In both experiments, the target referent object for “There” and “Where” trials was the mutually-visible object and the object the speaker could not see, respectively. Overall, young adults outperformed older adults, even in the simpler two-cue Experiment 2 (ps<.006). While older adults were significantly above chance in “There” trials for both experiments as well as “Where” trials in Experiment 2 (ps<.05), they had specific difficulty in integrating three cues in “Where” trials, where a more sophisticated interpretation of the multiple cues was required (p=.42). Our findings provide important insights into an age-related decline of pragmatic reasoning in older adults.
18

Prsa, Mario, Steven Gale, and Olaf Blanke. "Self-motion leads to mandatory cue fusion across sensory modalities." Journal of Neurophysiology 108, no. 8 (October 15, 2012): 2282–91. http://dx.doi.org/10.1152/jn.00439.2012.

Abstract:
When perceiving properties of the world, we effortlessly combine multiple sensory cues into optimal estimates. Estimates derived from the individual cues are generally retained once the multisensory estimate is produced and discarded only if the cues stem from the same sensory modality (i.e., mandatory fusion). Does multisensory integration differ in that respect when the object of perception is one's own body, rather than an external variable? We quantified how humans combine visual and vestibular information for perceiving own-body rotations and specifically tested whether such idiothetic cues are subjected to mandatory fusion. Participants made extensive size comparisons between successive whole body rotations using only visual, only vestibular, and both senses together. Probabilistic descriptions of the subjects' perceptual estimates were compared with a Bayes-optimal integration model. Similarity between model predictions and experimental data echoed a statistically optimal mechanism of multisensory integration. Most importantly, size discrimination data for rotations composed of both stimuli was best accounted for by a model in which only the bimodal estimator is accessible for perceptual judgments as opposed to an independent or additive use of all three estimators (visual, vestibular, and bimodal). Indeed, subjects' thresholds for detecting two multisensory rotations as different from one another were, in pertinent cases, larger than those measured using either single-cue estimate alone. Rotations different in terms of the individual visual and vestibular inputs but quasi-identical in terms of the integrated bimodal estimate became perceptual metamers. This reveals an exceptional case of mandatory fusion of cues stemming from two different sensory modalities.
19

McDonald, J. Scott, Colin W. G. Clifford, Selina S. Solomon, Spencer C. Chen, and Samuel G. Solomon. "Integration and segregation of multiple motion signals by neurons in area MT of primate." Journal of Neurophysiology 111, no. 2 (January 15, 2014): 369–78. http://dx.doi.org/10.1152/jn.00254.2013.

Abstract:
We used multielectrode arrays to measure the response of populations of neurons in primate middle temporal area to the transparent motion of two superimposed dot fields moving in different directions. The shape of the population response was well predicted by the sum of the responses to the constituent fields. However, the population response profile for transparent dot fields was similar to that for coherent plaid motion and hence an unreliable cue to transparency. We then used single-unit recording to characterize component and pattern cells from their response to drifting plaids. Unlike for plaids, component cells responded to the average direction of superimposed dot fields, whereas pattern cells could signal the constituent motions. This observation provides support for a strong prediction of the Simoncelli and Heeger (1998) model of motion analysis in area middle temporal, and suggests that pattern cells have a special status in the processing of superimposed dot fields.
20

Billino, Jutta, and Knut Drewing. "Age Effects on Visuo-Haptic Length Discrimination: Evidence for Optimal Integration of Senses in Senior Adults." Multisensory Research 31, no. 3-4 (2018): 273–300. http://dx.doi.org/10.1163/22134808-00002601.

Abstract:
Demographic changes in most developed societies have fostered research on functional aging. While cognitive changes have been characterized elaborately, understanding of perceptual aging lags behind. We investigated age effects on the mechanisms of how multiple sources of sensory information are merged into a common percept. We studied visuo-haptic integration in a length discrimination task. A total of 24 young (20–25 years) and 27 senior (69–77 years) adults compared standard stimuli to appropriate sets of comparison stimuli. Standard stimuli were explored under visual, haptic, or visuo-haptic conditions. The task procedure allowed an intersensory conflict to be introduced by anamorphic lenses. Comparison stimuli were exclusively explored haptically. We derived psychometric functions for each condition, determining points of subjective equality and discrimination thresholds. We notably evaluated visuo-haptic perception by different models of multisensory processing, i.e., the Maximum-Likelihood-Estimate model of optimal cue integration, a suboptimal integration model, and a cue switching model. Our results support robust visuo-haptic integration across the adult lifespan. We found suboptimal weighted averaging of sensory sources in young adults; senior adults, however, exploited differential sensory reliabilities more efficiently to optimize thresholds. Indeed, evaluation of the MLE model indicates that young adults underweighted visual cues by more than 30%; in contrast, visual weights of senior adults deviated only by about 3% from predictions. We suggest that close to optimal multisensory integration might contribute to successful compensation for age-related sensory losses and provide a critical resource. Differentiation between multisensory integration during healthy aging and age-related pathological challenges on the sensory systems awaits further exploration.
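The Maximum-Likelihood-Estimate (MLE) model evaluated above predicts cue weights from single-cue discrimination thresholds, and the empirical weight is read off the shift of the point of subjective equality (PSE) under an intersensory conflict. The following sketch shows that arithmetic with invented numbers; the thresholds and conflict size are not those reported in the study.

```python
# Minimal sketch: MLE-predicted visual weight from single-cue thresholds versus the
# empirical weight inferred from a PSE shift under a visuo-haptic conflict.
def predicted_visual_weight(threshold_visual, threshold_haptic):
    # Reliability is inverse variance; thresholds are proportional to the noise SD.
    r_v = 1.0 / threshold_visual**2
    r_h = 1.0 / threshold_haptic**2
    return r_v / (r_v + r_h)

def empirical_visual_weight(pse_shift, conflict_size):
    # If vision is displaced by conflict_size relative to haptics, the bimodal PSE
    # moves toward vision by w_v * conflict_size.
    return pse_shift / conflict_size

w_pred = predicted_visual_weight(threshold_visual=4.0, threshold_haptic=6.0)   # ~0.69
w_obs = empirical_visual_weight(pse_shift=5.5, conflict_size=8.0)              # ~0.69
print(round(w_pred, 2), round(w_obs, 2))   # close agreement indicates near-optimal weighting
```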
21

Yong, N. Au, G. D. Paige, and S. H. Seidman. "Multiple Sensory Cues Underlying the Perception of Translation and Path." Journal of Neurophysiology 97, no. 2 (February 2007): 1100–1113. http://dx.doi.org/10.1152/jn.00694.2006.

Abstract:
The translational linear vestibuloocular reflex compensates most accurately for high frequencies of head translation, with response magnitude decreasing with declining stimulus frequency. However, studies of the perception of translation typically report robust responses even at low frequencies or during prolonged motion. This inconsistency may reflect the incorporation of nondirectional sensory information associated with the vibration and noise that typically accompany translation, into motion perception. We investigated the perception of passive translation in humans while dissociating nondirectional cues from actual head motion. In a cue-dissociation experiment, interaural (IA) motion was generated using either a linear sled, the mechanics of which generated noise and vibration cues that were correlated with the motion profile, or a multiaxis technique that dissociated these cues from actual motion. In a trajectory-shift experiment, IA motion was interrupted by a sudden change in direction (±30° diagonal) that produced a change in linear acceleration while maintaining sled speed and therefore mechanical (nondirectional) cues. During multi-axis cue-dissociation trials, subjects reported erroneous translation perceptions that strongly reflected the pattern of nondirectional cues, as opposed to nearly veridical percepts when motion and nondirectional cues coincided. During trajectory-shift trials, subjects' percepts were initially accurate, but erroneous following the direction change. Results suggest that nondirectional cues strongly influence the perception of linear motion, while the utility of cues directly related to translational acceleration is limited. One key implication is that “path integration” likely involves complex mechanisms that depend on nondirectional and contextual self-motion cues in support of limited and transient otolith-dependent acceleration input.
22

LI, YANLI, ZHONG ZHOU, and WEI WU. "AUTOMATIC PEDESTRIAN SEGMENTATION COMBINING SHAPE, PUZZLE AND APPEARANCE." International Journal on Artificial Intelligence Tools 22, no. 05 (October 2013): 1360004. http://dx.doi.org/10.1142/s021821301360004x.

Abstract:
In this paper, we address the problem of automatically segmenting non-rigid pedestrians in still images. Since this task is well known to be difficult for any single type of model or cue, a novel approach utilizing shape, puzzle and appearance cues is presented. The major contribution of this approach lies in the combination of multiple cues to refine pedestrian segmentation successively, which has two main components: (1) a shape-guided puzzle integration scheme, which extracts pedestrians by assembling puzzle pieces under the constraint of a shape template; (2) a pedestrian refinement scheme, which is carried out by optimizing an automatically generated trimap that encodes both the human silhouette and skeleton. Qualitative and quantitative evaluations on several public datasets verify the approach's robustness to varied articulated bodies, human appearances and partial occlusion, and show that it segments pedestrians more accurately than methods based only on appearance or shape cues.
23

Spellman, Barbara A. "Acting as Intuitive Scientists: Contingency Judgments Are Made While Controlling for Alternative Potential Causes." Psychological Science 7, no. 6 (November 1996): 337–42. http://dx.doi.org/10.1111/j.1467-9280.1996.tb00385.x.

Abstract:
In judging the efficacy of multiple causes of an effect, human performance has been found to deviate from the “normative” ΔP contingency rule. However, in cases of multiple causes, that rule might not be normative; scientists and philosophers, for example, know that when judging a potential cause, one must control for all other potential causes. In an experiment in which they were shown trial-by-trial effects of two potential causes (which sometimes covaried), subjects used conditional rather than unconditional contingencies to rate the efficacy of the causes. A conditional contingency analysis may explain various “nonnormative” cue-integration effects (e.g., discounting) found in the literature and is relevant to how people unravel Simpson's paradox.
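The distinction between the unconditional ΔP rule and a conditional contingency that controls for an alternative cause can be shown in a few lines. The trial records below are invented and chosen so that the unconditional rule credits the target cause while the conditional analysis correctly does not; this illustrates the statistics, not Spellman's materials.

```python
# Minimal sketch: unconditional ΔP versus a conditional contingency computed separately
# at each level of a covarying alternative cause.
import numpy as np

def delta_p(cause, effect):
    cause, effect = np.asarray(cause, bool), np.asarray(effect, bool)
    return effect[cause].mean() - effect[~cause].mean()

def conditional_delta_p(cause, alt, effect):
    cause, alt, effect = (np.asarray(x, bool) for x in (cause, alt, effect))
    vals = []
    for level in (False, True):
        sel = alt == level
        if cause[sel].any() and (~cause[sel]).any():
            vals.append(delta_p(cause[sel], effect[sel]))
    return float(np.mean(vals))

# Toy trials in which the target cause covaries with a genuinely effective alternative.
cause  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
alt    = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
effect = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
print(delta_p(cause, effect))                    # 0.6: the target cause looks effective
print(conditional_delta_p(cause, alt, effect))   # 0.0: no effect once the alternative is controlled
```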
24

Bolognini, Nadia, Fabrizio Leor, Claudia Passamonti, Barry E. Stein, and Elisabetta Làdavas. "Multisensory-Mediated Auditory Localization." Perception 36, no. 10 (October 2007): 1477–85. http://dx.doi.org/10.1068/p5846.

Abstract:
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in one condition in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
25

Parise, Cesare V., Vanessa Harrar, Marc O. Ernst, and Charles Spence. "Cross-correlation between Auditory and Visual Signals Promotes Multisensory Integration." Multisensory Research 26, no. 3 (2013): 307–16. http://dx.doi.org/10.1163/22134808-00002417.

Abstract:
Humans are equipped with multiple sensory channels that provide both redundant and complementary information about the objects and events in the world around them. A primary challenge for the brain is therefore to solve the ‘correspondence problem’, that is, to bind those signals that likely originate from the same environmental source, while keeping separate those unisensory inputs that likely belong to different objects/events. Whether multiple signals have a common origin or not must, however, be inferred from the signals themselves through a causal inference process. Recent studies have demonstrated that cross-correlation, that is, the similarity in temporal structure between unimodal signals, represents a powerful cue for solving the correspondence problem in humans. Here we provide further evidence for the role of the temporal correlation between auditory and visual signals in multisensory integration. Capitalizing on the well-known fact that sensitivity to crossmodal conflict is inversely related to the strength of coupling between the signals, we measured sensitivity to crossmodal spatial conflicts as a function of the cross-correlation between the temporal structures of the audiovisual signals. Observers’ performance was systematically modulated by the cross-correlation, with lower sensitivity to crossmodal conflict being measured for correlated as compared to uncorrelated audiovisual signals. These results therefore provide support for the claim that cross-correlation promotes multisensory integration. A Bayesian framework is proposed to interpret the present results, whereby stimulus correlation is represented on the prior distribution of expected crossmodal co-occurrence.
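Cross-correlation as a measure of shared temporal structure between the two modalities can be sketched as follows. The synthetic signals and the normalization are placeholders for illustration; the study's actual stimuli and analysis are not reproduced here.

```python
# Minimal sketch: normalized cross-correlation between an auditory and a visual signal,
# maximized over lags, for correlated versus uncorrelated temporal structure.
import numpy as np

def max_normalized_xcorr(a, v):
    a = (a - a.mean()) / a.std()
    v = (v - v.mean()) / v.std()
    xcorr = np.correlate(a, v, mode="full") / len(a)
    return xcorr.max()

rng = np.random.default_rng(1)
shared = rng.standard_normal(500)
auditory = shared + 0.3 * rng.standard_normal(500)
visual_correlated = np.roll(shared, 5) + 0.3 * rng.standard_normal(500)   # same structure, small lag
visual_uncorrelated = rng.standard_normal(500)

print(max_normalized_xcorr(auditory, visual_correlated))    # high: signals share temporal structure
print(max_normalized_xcorr(auditory, visual_uncorrelated))  # low: no shared structure
```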
26

Hahn, Thomas P., and Scott A. MacDougall-Shackleton. "Adaptive specialization, conditional plasticity and phylogenetic history in the reproductive cue response systems of birds." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1490 (August 7, 2007): 267–86. http://dx.doi.org/10.1098/rstb.2007.2139.

Abstract:
Appropriately timed integration of breeding into avian annual cycles is critical to both reproductive success and survival. The mechanisms by which birds regulate timing of breeding depend on environmental cue response systems that regulate both when birds do and do not breed. Despite there being multiple possible explanations for birds' abilities to time breeding appropriately in different environments, and for the distribution of different cue response system characteristics among taxa, many studies infer that adaptive specialization of cue response systems has occurred without explicitly considering the alternatives. In this paper, we make explicit three hypotheses concerning the timing of reproduction and distribution of cue response characteristics among taxa: adaptive specialization; conditional plasticity; and phylogenetic history. We emphasize in particular that although conditional plasticity built into avian cue response systems (e.g. differing rates of gonadal development and differing latencies until onset of photorefractoriness) may lead to maladaptive annual cycles in some novel circumstances, this plasticity also can lead to what appear to be adaptively specialized cue response systems if not viewed in a comparative context. We use a comparative approach to account for the distribution of one important feature of avian reproductive cue response systems, photorefractoriness. Analysis of the distribution within songbirds of one criterion for absolute photorefractoriness, the spontaneous regression of the gonads without any decline in photoperiod, reveals that a failure to display this trait probably represents an adaptive specialization to facilitate a flexible reproductive schedule. More finely resolved analysis of both criteria for absolute photorefractoriness (the second being total lack of a reproductive response even to constant light after gonadal regression has occurred) within the cardueline finches not only provides further confirmation of this interpretation, but also indicates that these two criteria for photorefractoriness can be, and have been, uncoupled in some taxa. We suggest that careful comparative studies at different phylogenetic scales will be extremely valuable for distinguishing between adaptive specialization and non-adaptive explanations, such as phylogenetic history as explanations of cue response traits in particular taxa. We also suggest that particular focus on taxa in which individuals may breed on very different photoperiods (latitudes or times of year) in different years should be particularly valuable in identifying the range of environmental conditions across which conditionally plastic cue responses can be adaptive.
27

Toni, Ivan, Nadim J. Shah, Gereon R. Fink, Daniel Thoenissen, Richard E. Passingham, and Karl Zilles. "Multiple Movement Representations in the Human Brain: An Event-Related fMRI Study." Journal of Cognitive Neuroscience 14, no. 5 (July 1, 2002): 769–84. http://dx.doi.org/10.1162/08989290260138663.

Abstract:
Neurovascular correlates of response preparation have been investigated in human neuroimaging studies. However, conventional neuroimaging cannot distinguish, within the same trial, between areas involved in response selection and/ or response execution and areas specifically involved in response preparation. The specific contribution of parietal and frontal areas to motor preparation has been explored in electrophysiological studies in monkey. However, the associative nature of sensorimotor tasks calls for the additional contributions of other cortical regions. In this article, we have investigated the functional anatomy of movement representations in the context of an associative visuomotor task with instructed delays. Neural correlates of movement representations have been assessed by isolating preparatory activity that is independent from the performance of an actual motor act, or from the presence of a response's target. Movement instruction (specified by visual cues) and motor performance (specified by an auditory cue) were separated by a variable delay period. We have used whole-brain event-related fMRI to measure human brain activity during the performance of such a task. We have focused our analysis on specific preparatory activity, defined as a sustained response over variable delay periods between a transient visual instruction cue and a brief motor response, temporally independent from the transient events. Behavioral and electrophysiological controls ensured that preparatory activity was not contaminated by overt motor responses or working memory processes. We report suggestive evidence for multiple movement representations in the human brain. Specific sustained activity in preparation for an action was found not only in parieto-frontal regions but also in extrastriate areas and in the posterior portion of the superior temporal sulcus. We suggest that goal-directed preparatory activity relies on both visuo-motor and visuoperceptual areas. These findings point to a functional anatomical basis for the integration of perceptual and executive processes.
28

Murphy, Aidan P., Hiroshi Ban, and Andrew E. Welchman. "Integration of texture and disparity cues to surface slant in dorsal visual cortex." Journal of Neurophysiology 110, no. 1 (July 1, 2013): 190–203. http://dx.doi.org/10.1152/jn.01055.2012.

Abstract:
Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for 1) integration of such qualitatively distinct cues and 2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted plane stimuli that were defined independently by texture, disparity, and combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflicts. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.
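A common benchmark for cue fusion of the kind tested here is quadratic summation of single-cue sensitivities. The sketch below states that prediction with invented d' values; it illustrates the logic of the fusion test rather than the study's fMRI decoding analysis.

```python
# Minimal sketch: the quadratic-summation prediction for combined-cue discriminability.
import numpy as np

def fusion_prediction(d_texture, d_disparity):
    # If slant estimates from the two cues are fused and their noises are independent,
    # combined sensitivity should approach the quadratic sum of single-cue sensitivities.
    return np.sqrt(d_texture**2 + d_disparity**2)

d_tex, d_disp, d_combined_observed = 0.8, 1.1, 1.35   # hypothetical sensitivities
print(fusion_prediction(d_tex, d_disp))               # ~1.36, close to the observed combined value
# A combined value at or below max(d_tex, d_disp) would instead suggest no fusion of the cues.
```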
29

Stein, Barry E., Liping Yu, Jinghong Xu, and Benjamin A. Rowland. "Plasticity in the acquisition of multisensory integration capabilities in superior colliculus." Seeing and Perceiving 25 (2012): 133. http://dx.doi.org/10.1163/187847612x647658.

Abstract:
The multisensory integration capabilities of superior colliculus (SC) neurons are normally acquired during early postnatal life and adapted to the environment in which they will be used. Recent evidence shows that they can even be acquired in adulthood, and require neither consciousness nor any of the reinforcement contingencies generally associated with learning. This process is believed to be based on Hebbian mechanisms, whereby the temporal coupling of multiple sensory inputs initiates development of a means of integrating their information. This predicts that co-activation of those input channels is sufficient to induce multisensory integration capabilities regardless of the specific spatiotemporal properties of the initiating stimuli. However, one might expect that the stimuli to be integrated should be consonant with the functional role of the neurons involved. For the SC, this would involve stimuli that can be localized. Experience with a non-localizable cue in one modality (e.g., ambient sound) and a discrete stimulus in another (e.g., a light flash) should not be sufficient for this purpose. Indeed, experiments with cats reared in omnidirectional sound (effectively masking discrete auditory events) reveal that the simple co-activation of two sensory input channels is not sufficient for this purpose. The data suggest that experience with the kinds of cross-modal events that facilitate the role of the SC in detecting, locating, and orienting to localized external events is a guiding factor in this maturational process. Supported by NIH grants NS 036916 and EY016716.
30

Kim, HyungGoo R., Xaq Pitkow, Dora E. Angelaki, and Gregory C. DeAngelis. "A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons." Journal of Neurophysiology 116, no. 3 (September 1, 2016): 1449–67. http://dx.doi.org/10.1152/jn.00005.2016.

Abstract:
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
31

Sanada, Takahisa M., Jerry D. Nguyenkim, and Gregory C. DeAngelis. "Representation of 3-D surface orientation by velocity and disparity gradient cues in area MT." Journal of Neurophysiology 107, no. 8 (April 15, 2012): 2109–22. http://dx.doi.org/10.1152/jn.00578.2011.

Abstract:
Neural coding of the three-dimensional (3-D) orientation of planar surface patches may be an important intermediate step in constructing representations of complex 3-D surface structure. Spatial gradients of binocular disparity, image velocity, and texture provide potent cues to the 3-D orientation (tilt and slant) of planar surfaces. Previous studies have described neurons in both dorsal and ventral stream areas that are selective for surface tilt based on one or more of these gradient cues. However, relatively little is known about whether single neurons provide consistent information about surface orientation from multiple gradient cues. Moreover, it is unclear how neural responses to combinations of surface orientation cues are related to responses to the individual cues. We measured responses of middle temporal (MT) neurons to random dot stimuli that simulated planar surfaces at a variety of tilts and slants. Four cue conditions were tested: disparity, velocity, and texture gradients alone, as well as all three gradient cues combined. Many neurons showed robust tuning for surface tilt based on disparity and velocity gradients, with relatively little selectivity for texture gradients. Some neurons showed consistent tilt preferences for disparity and velocity cues, whereas others showed large discrepancies. Responses to the combined stimulus were generally well described as a weighted linear sum of responses to the individual cues, even when disparity and velocity preferences were discrepant. These findings suggest that area MT contains a rudimentary representation of 3-D surface orientation based on multiple cues, with single neurons implementing a simple cue integration rule.
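The "weighted linear sum" description of combined-cue responses can be illustrated with a small regression. The tuning curves below are synthetic stand-ins for recorded responses; the generating weights are arbitrary and are recovered by least squares purely to show what fitting such a model involves.

```python
# Minimal sketch: fit a combined-cue tilt-tuning curve as a weighted linear sum of
# single-cue tuning curves plus a baseline.
import numpy as np

tilts = np.deg2rad(np.arange(0, 360, 45))
resp_disparity = 10 + 8 * np.cos(tilts)          # hypothetical tilt tuning, disparity cue
resp_velocity  = 12 + 5 * np.cos(tilts - 0.5)    # hypothetical tilt tuning, velocity cue
resp_texture   = 9 + 2 * np.cos(2 * tilts)       # weak, differently shaped texture tuning
resp_combined  = 0.6 * resp_disparity + 0.5 * resp_velocity + 0.1 * resp_texture + 2.0

# Weighted linear sum model: combined = w1*disparity + w2*velocity + w3*texture + baseline
X = np.column_stack([resp_disparity, resp_velocity, resp_texture, np.ones_like(tilts)])
weights, *_ = np.linalg.lstsq(X, resp_combined, rcond=None)
print(np.round(weights, 2))   # recovers the generating weights and baseline [0.6, 0.5, 0.1, 2.0]
```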
32

Fukushima, Makoto, Alex M. Doyle, Matthew P. Mullarkey, Mortimer Mishkin, and Bruno B. Averbeck. "Distributed acoustic cues for caller identity in macaque vocalization." Royal Society Open Science 2, no. 12 (December 2015): 150432. http://dx.doi.org/10.1098/rsos.150432.

Abstract:
Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured ‘coo’ call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Weiner entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral–temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call’s fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.
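The classification step described above (a regularized classifier trained to identify callers from acoustic features) can be sketched generically. The features below are random placeholders rather than modulation power spectra, and the classifier settings are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: a regularized multiclass classifier for caller identity from
# acoustic feature vectors, evaluated by cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_callers, calls_per_caller, n_features = 8, 40, 60
caller_means = rng.normal(0, 1, (n_callers, n_features))        # each caller has its own signature
X = np.vstack([rng.normal(m, 1.0, (calls_per_caller, n_features)) for m in caller_means])
y = np.repeat(np.arange(n_callers), calls_per_caller)

clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)     # L2 regularization
print(cross_val_score(clf, X, y, cv=5).mean())                   # well above chance (1/8)
```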
33

Ogawa, Akitoshi, and Emiliano Macaluso. "Multisensory interactions modulate response of V3A for depth-motion processing." Seeing and Perceiving 25 (2012): 76. http://dx.doi.org/10.1163/187847612x646974.

Abstract:
For the perception of an object moving in depth, stereoscopic signals provide us with an important spatial cue. However, other modalities can also contribute to motion perception. For example, auditory signals generated by the moving object change according to the approaching/receding direction. Specifically, sound intensity increases/decreases, while sound frequency shifts to higher/lower frequencies (i.e., the Doppler effect). The integration of these unisensory cues may enhance the perception of motion direction by modulating activity within specific brain areas. This fMRI study assessed the interaction between auditory and visual depth-cues on the response of V3A. The task was to discriminate the direction of an auditory target that moved in depth without any azimuth shift. Orthogonally to the volume change, the sound frequency could either rise or fall, yielding Doppler ‘matched vs. unmatched’ conditions. On two-thirds of the trials, the sound was synchronously coupled with a task-irrelevant visual stimulus moving either forward or backward (i.e., a ball expanding or contracting), thus leading to audio–visual ‘congruent vs. incongruent’ signals of motion. These conditions were presented in either ‘2D’ or ‘3D’ viewing. The behavioral data showed that the best motion-direction discrimination performance was obtained in the Doppler-matched, audio–visual congruent condition. In V3A, we found the expected responses to stereoscopic cues and, most importantly, maximal activation was found for 3D trials comprising matched Doppler and congruent audio–visual signals. These results demonstrate that the perception of objects moving in depth entails the integration of multiple unisensory cues, and support the view that such integration can affect activity in brain regions traditionally considered unimodal.
34

Crane, Benjamin T. "Effect of eye position during human visual-vestibular integration of heading perception." Journal of Neurophysiology 118, no. 3 (September 1, 2017): 1609–21. http://dx.doi.org/10.1152/jn.00037.2017.

Abstract:
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
35

Ritter, Sean P. A., Logan A. Brand, Shelby L. Vincent, Albert Remus R. Rosana, Allison C. Lewis, Denise S. Whitford, and George W. Owttrim. "Multiple Light-Dark Signals Regulate Expression of the DEAD-Box RNA Helicase CrhR in Synechocystis PCC 6803." Cells 11, no. 21 (October 27, 2022): 3397. http://dx.doi.org/10.3390/cells11213397.

Abstract:
Since oxygenic photosynthesis evolved in the common ancestor of cyanobacteria during the Archean, a range of sensing and response strategies evolved to allow efficient acclimation to the fluctuating light conditions experienced in the diverse environments they inhabit. However, how these regulatory mechanisms are assimilated at the molecular level to coordinate individual gene expression is still being elucidated. Here, we demonstrate that integration of a series of three distinct light signals generate an unexpectedly complex network regulating expression of the sole DEAD-box RNA helicase, CrhR, encoded in Synechocystis sp. PCC 6803. The mechanisms function at the transcriptional, translational and post-translation levels, fine-tuning CrhR abundance to permit rapid acclimation to fluctuating light and temperature regimes. CrhR abundance is enhanced 15-fold by low temperature stress. We initially confirmed that the primary mechanism controlling crhR transcript accumulation at 20 °C requires a light quantity-driven reduction of the redox poise in the vicinity of the plastoquinone pool. Once transcribed, a specific light quality cue, a red light signal, was required for crhR translation, far-red reversal of which indicates a phytochrome-mediated mechanism. Examination of CrhR repression at 30 °C revealed that a redox- and light quality-independent light signal was required to initiate CrhR degradation. The crucial role of light was further revealed by the observation that dark conditions superseded the light signals required to initiate each of these regulatory processes. The findings reveal an unexpected complexity of light-dark sensing and signaling that regulate expression of an individual gene in cyanobacteria, an integrated mechanism of environmental perception not previously reported.
36

Pearson, Hilary, and Jonathan Wilbiks. "Effects of Audiovisual Memory Cues on Working Memory Recall." Vision 5, no. 1 (March 19, 2021): 14. http://dx.doi.org/10.3390/vision5010014.

Abstract:
Previous studies have focused on topics such as multimodal integration and object discrimination, but there is limited research on the effect of multimodal learning in memory. Perceptual studies have shown facilitative effects of multimodal stimuli for learning; the current study aims to determine whether this effect persists with memory cues. The purpose of this study was to investigate the effect that audiovisual memory cues have on memory recall, as well as whether the use of multiple memory cues leads to higher recall. The goal was to orthogonally evaluate the effect of the number of self-generated memory cues (one or three), and the modality of the self-generated memory-cue (visual: written words, auditory: spoken words, or audiovisual). A recall task was administered where participants were presented with their self-generated memory cues and asked to determine the target word. There was a significant main effect for number of cues, but no main effect for modality. A secondary goal of this study was to determine which types of memory cues result in the highest recall. Self-reference cues resulted in the highest accuracy score. This study has applications to improving academic performance by using the most efficient learning techniques.
APA, Harvard, Vancouver, ISO, and other styles
37

Yow, W. Quin, Jiawen Lee, and Xiaoqian Li. "AGE-RELATED DECLINES IN SOCIAL COGNITIVE PROCESSES OF OLDER ADULTS." Innovation in Aging 3, Supplement_1 (November 2019): S882–S883. http://dx.doi.org/10.1093/geroni/igz038.3232.

Full text
Abstract:
Despite current literature suggesting that various social cognitive processes seem to be impaired in late adulthood, e.g., processing of social gaze cues, the trajectory of decline in social cognition in late adulthood is not well understood (e.g., Grainger et al., 2018; Paal & Bereczkei, 2007). As part of a multi-institutional research project, we began to systematically investigate whether there is age-related decline in older adults’ ability to infer others’ mental states, integrate multiple referential cues, and identify emotional states of others using prosodic cues. Sixteen older adults aged 71-85, of whom 9 were cognitively healthy and 7 had mild-to-moderate dementia, and 7 younger adults aged 19-37 underwent three tasks. In a theory-of-mind story task, participants answered true/false questions about the beliefs of the protagonists in the stories. A cue integration task assessed participants’ ability to integrate the experimenter’s gaze and semantic cues to identify a referent object. In an emotion-prosody task, participants judged whether the speaker sounded happy or sad in low-pass filtered audio. Non-parametric tests revealed that younger adults outperformed both groups of older adults (both ps=.001) in inferring the protagonists’ beliefs in the stories. Younger adults were also better and more accurate than both groups of older adults in integrating cues to identify the referent object and in using prosodic cues to identify emotional states respectively (ps<.001). Both groups of older adults did not differ significantly from each other in the tasks. These findings provide emerging and important insights into the decline of social cognitive processes in late adulthood.
APA, Harvard, Vancouver, ISO, and other styles
38

Naya, Yuji, He Chen, Cen Yang, and Wendy A. Suzuki. "Contributions of primate prefrontal cortex and medial temporal lobe to temporal-order memory." Proceedings of the National Academy of Sciences 114, no. 51 (November 30, 2017): 13555–60. http://dx.doi.org/10.1073/pnas.1712711114.

Full text
Abstract:
Neuropsychological and neurophysiological studies have emphasized the role of the prefrontal cortex (PFC) in maintaining information about the temporal order of events or items for upcoming actions. However, the medial temporal lobe (MTL) has also been considered critical to bind individual events or items to their temporal context in episodic memory. Here we characterize the contributions of these brain areas by comparing single-unit activity in the dorsal and ventral regions of macaque lateral PFC (d-PFC and v-PFC) with activity in MTL areas including the hippocampus (HPC), entorhinal cortex, and perirhinal cortex (PRC) as well as in area TE during the encoding phase of a temporal-order memory task. The v-PFC cells signaled specific items at particular time periods of the task. By contrast, MTL cortical cells signaled specific items across multiple time periods and discriminated the items between time periods by modulating their firing rates. Analysis of the temporal dynamics of these signals showed that the conjunctive signal of item and temporal-order information in PRC developed earlier than that seen in v-PFC. During the delay interval between the two cue stimuli, while v-PFC provided prominent stimulus-selective delay activity, MTL areas did not. Both regions of PFC and HPC exhibited an incremental timing signal that appeared to represent the continuous passage of time during the encoding phase. However, the incremental timing signal in HPC was more prominent than that observed in PFC. These results suggest that PFC and MTL contribute to the encoding of the integration of item and timing information in distinct ways.
APA, Harvard, Vancouver, ISO, and other styles
39

Massaro, Dominic W. "Integrating cues in speech perception." Behavioral and Brain Sciences 21, no. 2 (April 1998): 275. http://dx.doi.org/10.1017/s0140525x98391177.

Full text
Abstract:
Sussman et al. describe an ecological property of the speech signal that is putatively functional in perception. An important issue, however, is whether their putative cue is an emerging feature or whether the second formant (F2) onset and the F2 vowel actually provide independent cues to perceptual categorization. Regardless of the outcome of this issue, an important goal of speech research is to understand how multiple cues are evaluated and integrated to achieve categorization.
APA, Harvard, Vancouver, ISO, and other styles
40

Bettig, Bernhard, and Vikram Bapat. "Integrating multiple information representations in a single CAD/CAM/CAE environment." Engineering with Computers 22, no. 1 (July 11, 2006): 11–23. http://dx.doi.org/10.1007/s00366-006-0025-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Petersen, Kia Vest, Jan Martinussen, Peter Ruhdal Jensen, and Christian Solem. "Repetitive, Marker-Free, Site-Specific Integration as a Novel Tool for Multiple Chromosomal Integration of DNA." Applied and Environmental Microbiology 79, no. 12 (March 29, 2013): 3563–69. http://dx.doi.org/10.1128/aem.00346-13.

Full text
Abstract:
We present a tool for repetitive, marker-free, site-specific integration in Lactococcus lactis, in which a nonreplicating plasmid vector (pKV6) carrying a phage attachment site (attP) can be integrated into a bacterial attachment site (attB). The novelty of the tool described here is the inclusion of a minimal bacterial attachment site (attBmin), two mutated loxP sequences (lox66 and lox71) allowing for removal of undesirable vector elements (antibiotic resistance marker), and a counterselection marker (oroP) for selection of loxP recombination on the pKV6 vector. When transformed into L. lactis expressing the phage TP901-1 integrase, pKV6 integrates with high frequency into the chromosome, where it is flanked by attL and attR hybrid attachment sites. After expression of Cre recombinase from a plasmid that is not able to replicate in L. lactis, loxP recombinants can be selected for by using 5-fluoroorotic acid. The introduced attBmin site can subsequently be used for a second round of integration. To examine if attP recombination was specific to the attB site, integration was performed in strains containing the attB, attL, and attR sites or the attL and attR sites only. Only attP-attB recombination was observed when all three sites were present. In the absence of the attB site, a low frequency of attP-attL recombination was observed. To demonstrate the functionality of the system, the xylose utilization genes (xylABR and xylT) from L. lactis strain KF147 were integrated into the chromosome of L. lactis strain MG1363 in two steps.
APA, Harvard, Vancouver, ISO, and other styles
42

Steiger, Sandra, Thomas Schmitt, and H. Martin Schaefer. "The origin and dynamic evolution of chemical information transfer." Proceedings of the Royal Society B: Biological Sciences 278, no. 1708 (December 22, 2010): 970–79. http://dx.doi.org/10.1098/rspb.2010.2285.

Full text
Abstract:
Although chemical communication is the most widespread form of communication, its evolution and diversity are not well understood. By integrating studies of a wide range of terrestrial plants and animals, we show that many chemicals are emitted, which can unintentionally provide information (cues) and, therefore, act as direct precursors for the evolution of intentional communication (signals). Depending on the content, design and the original function of the cue, there are predictable ways that selection can enhance the communicative function of chemicals. We review recent progress on how efficacy-based selection by receivers leads to distinct evolutionary trajectories of chemical communication. Because the original function of a cue may channel but also constrain the evolution of functional communication, we show that a broad perspective on multiple selective pressures acting upon chemicals provides important insights into the origin and dynamic evolution of chemical information transfer. Finally, we argue that integrating chemical ecology into communication theory may significantly enhance our understanding of the evolution, the design and the content of signals in general.
APA, Harvard, Vancouver, ISO, and other styles
43

Park, Soojin, Marvin M. Chun, and Marcia K. Johnson. "Refreshing and Integrating Visual Scenes in Scene-selective Cortex." Journal of Cognitive Neuroscience 22, no. 12 (December 2010): 2813–22. http://dx.doi.org/10.1162/jocn.2009.21406.

Full text
Abstract:
Constructing a rich and coherent visual experience involves maintaining visual information that is not perceptually available in the current view. Recent studies suggest that briefly thinking about a stimulus (refreshing) can modulate activity in category-specific visual areas. Here, we tested the nature of such perceptually refreshed representations in the parahippocampal place area (PPA) and retrosplenial cortex (RSC) using fMRI. We asked whether a refreshed representation is specific to a restricted view of a scene, or more view-invariant. Participants saw a panoramic scene and were asked to think back to (refresh) a part of the scene after it disappeared. In some trials, the refresh cue appeared twice on the same side (e.g., refresh left–refresh left), and in other trials, the refresh cue appeared on different sides (e.g., refresh left–refresh right). A control condition presented halves of the scene twice on the same side (e.g., perceive left–perceive left) or on different sides (e.g., perceive left–perceive right). When scenes were physically repeated, both the PPA and RSC showed greater activation for the different-side repetition than the same-side repetition, suggesting view-specific representations. When participants refreshed scenes, the PPA showed view-specific activity just as in the physical repeat conditions, whereas RSC showed an equal amount of activation for different- and same-side conditions. This finding suggests that in RSC, refreshed representations were not restricted to a specific view of a scene, but extended beyond the target half into the entire scene. Thus, RSC activity associated with refreshing may provide a mechanism for integrating multiple views in the mind.
APA, Harvard, Vancouver, ISO, and other styles
44

Lambert, Jolanda M., Roger S. Bongers, and Michiel Kleerebezem. "Cre-lox-Based System for Multiple Gene Deletions and Selectable-Marker Removal in Lactobacillus plantarum." Applied and Environmental Microbiology 73, no. 4 (December 1, 2006): 1126–35. http://dx.doi.org/10.1128/aem.01473-06.

Full text
Abstract:
The classic strategy to achieve gene deletion variants is based on double-crossover integration of nonreplicating vectors into the genome. In addition, recombination systems such as Cre-lox have been used extensively, mainly for eukaryotic organisms. This study presents the construction of a Cre-lox-based system for multiple gene deletions in Lactobacillus plantarum that could be adapted for use on gram-positive bacteria. First, an effective mutagenesis vector (pNZ5319) was constructed that allows direct cloning of blunt-end PCR products representing homologous recombination target regions. Using this mutagenesis vector, double-crossover gene replacement mutants could be readily selected based on their antibiotic resistance phenotype. In the resulting mutants, the target gene is replaced by a lox66-P32-cat-lox71 cassette, where lox66 and lox71 are mutant variants of loxP and P32-cat is a chloramphenicol resistance cassette. The lox sites serve as recognition sites for the Cre enzyme, a protein that belongs to the integrase family of site-specific recombinases. Thus, transient Cre recombinase expression in double-crossover mutants leads to recombination of the lox66-P32-cat-lox71 cassette into a double-mutant loxP site, called lox72, which displays strongly reduced recognition by Cre. The effectiveness of the Cre-lox-based strategy for multiple gene deletions was demonstrated by construction of both single and double gene deletions at the melA and bsh1 loci on the chromosome of the gram-positive model organism Lactobacillus plantarum WCFS1. Furthermore, the efficiency of the Cre-lox-based system in multiple gene replacements was determined by successive mutagenesis of the genetically closely linked loci melA and lacS2 in L. plantarum WCFS1. The fact that 99.4% of the clones that were analyzed had undergone correct Cre-lox resolution emphasizes the suitability of the system described here for multiple gene replacement and deletion strategies in a single genetic background.
APA, Harvard, Vancouver, ISO, and other styles
45

Yan, J., Q. Lu, Z. Fang, N. Li, L. Chen, and M. Pitt. "From building to city level dynamic digital Twin: a review from data management perspective." IOP Conference Series: Earth and Environmental Science 1101, no. 9 (November 1, 2022): 092033. http://dx.doi.org/10.1088/1755-1315/1101/9/092033.

Full text
Abstract:
The development of the digital twin (DT) has received great attention since the concept was brought over from the manufacturing and aerospace areas. In the architectural, engineering, construction and facility management (AEC/FM) sector, DTs are capable of integrating heterogeneous metadata and cutting-edge technologies like artificial intelligence and machine learning to create a dynamic digital environment for various purposes. Although building information modelling (BIM) appears to be a significant contributor to DTs, one of the major limitations for DT development is how to construct and provide a shared data environment for all stakeholders to collaborate throughout the life cycle. Furthermore, as the range of stakeholders’ requirements for DTs expands from a single building to multiple buildings and regional/city levels, the information and data management gaps (e.g., BIM and GIS data integration) become more challenging and critical. To address these gaps, this paper aims to 1) review the current data management for building and city level DTs from a technical perspective; 2) summarise their major data management issues from building to city levels based on the review; 3) introduce the concept of city-level Common Data Environment (CDE) that addresses the issues identified above, and discuss the possibilities of developing a CDE for a dynamic city-level DT.
APA, Harvard, Vancouver, ISO, and other styles
46

Abugharbieh, Khaldoon, and Hazem W. Marar. "Integrating multiple state‐of‐the‐art computer‐aided design tools in microelectronics circuit design classes." Computer Applications in Engineering Education 27, no. 5 (July 23, 2019): 1156–67. http://dx.doi.org/10.1002/cae.22143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Shahabi, Mahmood, Mohammad Ali Ghorbani, Sujay Raghavendra Naganna, Sungwon Kim, Sinan Jasim Hadi, Samed Inyurt, Aitazaz Ahsan Farooque, and Zaher Mundher Yaseen. "Integration of Multiple Models with Hybrid Artificial Neural Network-Genetic Algorithm for Soil Cation-Exchange Capacity Prediction." Complexity 2022 (June 13, 2022): 1–15. http://dx.doi.org/10.1155/2022/3123475.

Full text
Abstract:
The potential of the soil to hold plant nutrients is governed by its cation-exchange capacity (CEC). Estimating soil CEC aids conventional soil management practices in replenishing the soil solution that supports plant growth. In this study, a multiple model integration scheme supervised with a hybrid genetic algorithm-neural network (MM-GANN) was developed and employed to predict soil CEC in the Tabriz plain, an arid region of Iran. The standalone models (i.e., artificial neural network (ANN) and extreme learning machine (ELM)) were implemented for incorporation into the MM-GANN, which was tested for its ability to enhance their prediction accuracy. Soil parameters such as clay, silt, pH, calcium carbonate equivalent (CCE), and soil organic matter (OM) were used as model inputs to predict soil CEC. Evaluated against several criteria, the MM-GANN model combining the predictions of the ELM and ANN models calibrated with all the soil parameters (clay, OM, pH, silt, and CCE) as inputs provided superior soil CEC estimates, with a Nash–Sutcliffe Efficiency (NSE) of 0.87, Root Mean Square Error (RMSE) of 2.885, Mean Absolute Error (MAE) of 2.249, Mean Absolute Percentage Error (MAPE) of 12.072, and coefficient of determination (R2) of 0.884. The proposed MM-GANN model is a reliable intelligence-based approach for the assessment of soil quality parameters intended for sustainability and management prospects.
APA, Harvard, Vancouver, ISO, and other styles
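To make the integration step in the entry above concrete, the sketch below blends two base predictors into a single CEC estimate and scores it with the metrics quoted in the abstract (NSE, RMSE, MAE, MAPE, R2). It is a minimal illustration under stated assumptions, not the authors' MM-GANN: the synthetic data, the two noisy predictions standing in for the ANN and ELM members, and the grid search over a single blending weight (a crude substitute for the genetic-algorithm supervisor) are all hypothetical.

```python
# Minimal multiple-model integration sketch for soil CEC prediction.
# NOT the authors' MM-GANN: base predictions and the weight search are stand-ins.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def evaluate(obs, sim):
    """Return the error metrics reported in the abstract."""
    err = obs - sim
    return {
        "NSE": nse(obs, sim),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "MAPE": 100.0 * np.mean(np.abs(err / obs)),
        "R2": np.corrcoef(obs, sim)[0, 1] ** 2,
    }

# Synthetic stand-ins: observed CEC and predictions from two base models.
rng = np.random.default_rng(0)
cec_obs = rng.uniform(10, 40, size=100)            # illustrative CEC values
pred_ann = cec_obs + rng.normal(0, 3.0, size=100)  # "ANN" member
pred_elm = cec_obs + rng.normal(0, 3.5, size=100)  # "ELM" member

# Integrate the members with a convex weight chosen to maximise NSE
# (a simple substitute for the genetic-algorithm search in the paper).
weights = np.linspace(0, 1, 101)
best_w = max(weights, key=lambda w: nse(cec_obs, w * pred_ann + (1 - w) * pred_elm))
combined = best_w * pred_ann + (1 - best_w) * pred_elm

print("best weight on ANN member:", best_w)
print(evaluate(cec_obs, combined))
```

The blending weight here is the simplest possible integration rule; the paper's scheme instead trains a supervising GA-tuned neural network on the member outputs, but the evaluation metrics are computed the same way.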
48

SU, D., and M. WAKELAM. "Evolutionary optimization within an intelligent hybrid system for design integration." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 13, no. 5 (November 1999): 351–63. http://dx.doi.org/10.1017/s0890060499135054.

Full text
Abstract:
An intelligent hybrid approach has been developed to integrate various stages in total design, including formulation of product design specifications, conceptual design, detail design, and manufacture. The integration is achieved by blending multiple artificial intelligence (AI) techniques and CAD/CAE/CAM into a single environment. It has been applied to power transmission system design. In addition to knowledge-based systems and artificial neural networks, another AI technique, genetic algorithms (GAs), is involved in the approach. The GA is used to conduct two optimization tasks: (1) searching for the best combination of design parameters to obtain the optimum design of gears, and (2) optimizing the architecture of the artificial neural networks used in the hybrid system. In this paper, after a brief overview of the intelligent hybrid system, the GA applications are described in detail.
APA, Harvard, Vancouver, ISO, and other styles
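The GA role described in the entry above can be illustrated with a compact, generic loop. The sketch below is not the authors' hybrid system: the two real-valued "design parameters", their bounds, and the quadratic fitness function are hypothetical placeholders; only the selection, one-point crossover, and mutation steps reflect the standard GA machinery the abstract refers to.

```python
# Generic genetic-algorithm sketch for parameter optimization.
# Fitness function and parameter bounds are illustrative placeholders only.
import random

def fitness(params):
    # Hypothetical objective: prefer parameters near a target operating point.
    x, y = params
    return -((x - 3.0) ** 2 + (y - 1.5) ** 2)

def ga(pop_size=30, generations=50, mutation_rate=0.2, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, len(a) - 1)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:      # bounded Gaussian mutation
                i = random.randrange(len(child))
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print("best parameters:", ga())
```

In the paper, the same loop structure is applied twice, once over gear design parameters and once over neural-network architecture choices, with fitness functions specific to each task.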
49

Chakraverty, Devasmita, Donna B. Jeffe, and Robert H. Tai. "Transition Experiences in MD–PhD Programs." CBE—Life Sciences Education 17, no. 3 (September 2018): ar41. http://dx.doi.org/10.1187/cbe.17-08-0187.

Full text
Abstract:
MD–PhD training takes, on average, 8 years to complete and involves two transitions, an MD-preclinical to PhD-research phase and a PhD-research to MD-clinical phase. There is a paucity of research about MD–PhD students’ experiences during each transition. This study examined transition experiences reported by 48 MD–PhD students who had experienced at least one of these transitions during their training. We purposefully sampled medical schools across the United States to recruit participants. Semistructured interviews were audio-recorded and transcribed for analysis; items focused on academic and social experiences within and outside their programs. Using a phenomenological approach and analytic induction, we examined students’ transition experiences during their MD–PhD programs. Five broad themes emerged centering on multiple needs: mentoring, facilitating integration with students in each phase, integrating the curriculum to foster mastery of skills needed for each phase, awareness of cultural differences between MD and PhD training, and support. None of the respondents attributed their transition experiences to gender or race/ethnicity. Students emphasized the need for mentoring by MD–PhD faculty and better institutional and program supports to mitigate feelings of isolation and help students relearn knowledge for clinical clerkships and ease re-entry into the hospital culture, which differs substantially from the research culture.
APA, Harvard, Vancouver, ISO, and other styles
50

Allawy, Nabeel I., and Amjad B. Abdulghafour. "Integration of CAD/CAE/RP Environment for Developing a New Product in Medical Field." Engineering and Technology Journal 38, no. 9A (September 25, 2020): 1276–82. http://dx.doi.org/10.30684/etj.v38i9a.982.

Full text
Abstract:
Reconstruction of the mandible after severe trauma is one of the most difficult challenges facing oral and maxillofacial surgery. The mandible is an essential element in the appearance of the human face and gives the face its distinctive shape. This paper proposes a methodology that allows the surgeon to perform virtual surgery, using engineering software to place the implant virtually and with high accuracy within the mandible based on the patient's medical data. The current study involved a 35-year-old man who had suffered a traffic accident resulting in multiple fractures of the mandible and facial bones. The steps required to perform virtual surgery were identified and images from CBCT scans were modelled using the software proposed in the research. The implant is designed as a mesh model, allowing the patient to return to a normal position. Finally, FEA procedures were applied in the Solidworks simulation software to test and verify the mechanical properties of the final implant.
APA, Harvard, Vancouver, ISO, and other styles
