Journal articles on the topic 'Actions processing'

Consult the top 50 journal articles for your research on the topic 'Actions processing.'

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Müsseler, Jochen, Silke Steininger, and Peter Wühr. "Can Actions Affect Perceptual Processing?" Quarterly Journal of Experimental Psychology Section A 54, no. 1 (February 2001): 137–54. http://dx.doi.org/10.1080/02724980042000057.

2

Cullen, Kathleen E., Jessica X. Brooks, and Soroush G. Sadeghi. "How Actions Alter Sensory Processing." Annals of the New York Academy of Sciences 1164, no. 1 (May 2009): 29–36. http://dx.doi.org/10.1111/j.1749-6632.2009.03866.x.

3

Heitger, Marcus H., Marc J. M. Macé, Jan Jastorff, Stephan P. Swinnen, and Guy A. Orban. "Cortical regions involved in the observation of bimanual actions." Journal of Neurophysiology 108, no. 9 (November 1, 2012): 2594–611. http://dx.doi.org/10.1152/jn.00408.2012.

Abstract:
Although we are beginning to understand how observed actions performed by conspecifics with a single hand are processed and how bimanual actions are controlled by the motor system, we know very little about the processing of observed bimanual actions. We used fMRI to compare the observation of bimanual manipulative actions with their unimanual components, relative to visual control conditions equalized for visual motion. Bimanual action observation did not activate any region specialized for processing visual signals related to this more elaborated action. On the contrary, observation of bimanual and unimanual actions activated similar occipito-temporal, parietal and premotor networks. However, whole-brain as well as region of interest (ROI) analyses revealed that this network functions differently under bimanual and unimanual conditions. Indeed, in bimanual conditions, activity in the network was overall more bilateral, especially in parietal cortex. In addition, ROI analyses indicated bilateral parietal activation patterns across hand conditions distinctly different from those at other levels of the action-observation network. These activation patterns suggest that while occipito-temporal and premotor levels are involved with processing the kinematics of the observed actions, the parietal cortex is more involved in the processing of static, postural aspects of the observed action. This study adds bimanual cooperation to the growing list of distinctions between parietal and premotor cortex regarding factors affecting visual processing of observed actions.
4

Demetre, James D., and Peter M. Vietze. "Discrepancy processing of actions in infancy." Infant Behavior and Development 9 (April 1986): 98. http://dx.doi.org/10.1016/s0163-6383(86)80100-x.

5

Rueschemeyer, Shirley-Ann, Oliver Lindemann, Daan van Rooij, Wessel van Dam, and Harold Bekkering. "Effects of Intentional Motor Actions on Embodied Language Processing." Experimental Psychology 57, no. 4 (December 1, 2010): 260–66. http://dx.doi.org/10.1027/1618-3169/a000031.

Abstract:
Embodied theories of language processing suggest that motor simulation is an automatic and necessary component of meaning representation. If this is the case, then language and action systems should be mutually dependent (i.e., motor activity should selectively modulate processing of words with an action-semantic component). In this paper, we investigate in two experiments whether evidence for mutual dependence can be found using a motor priming paradigm. Specifically, participants performed either an intentional or a passive motor task while processing words denoting manipulable and nonmanipulable objects. The performance rates (Experiment 1) and response latencies (Experiment 2) in a lexical-decision task reveal that participants performing an intentional action were positively affected in the processing of words denoting manipulable objects as compared to nonmanipulable objects. This was not the case if participants performed a secondary passive motor action (Experiment 1) or did not perform a secondary motor task (Experiment 2). The results go beyond previous research showing that language processes involve motor systems to demonstrate that the execution of motor actions has a selective effect on the semantic processing of words. We suggest that intentional actions activate specific parts of the neural motor system, which are also engaged for lexical-semantic processing of action-related words and discuss the beneficial versus inhibitory nature of this relationship. The results provide new insights into the embodiment of language and the bidirectionality of effects between language and action processing.
6

Beauprez, Sophie-Anne, Yannick Blandin, Yves Almecija, and Christel Bidet-Ildei. "Physical and observational practices of unusual actions prime action verb processing." Brain and Cognition 138 (February 2020): 103630. http://dx.doi.org/10.1016/j.bandc.2019.103630.

7

Heil, Lieke, Olympia Colizoli, Egbert Hartstra, Johan Kwisthout, Stan van Pelt, Iris van Rooij, and Harold Bekkering. "Processing of Prediction Errors in Mentalizing Areas." Journal of Cognitive Neuroscience 31, no. 6 (June 2019): 900–912. http://dx.doi.org/10.1162/jocn_a_01381.

Abstract:
When seeing people perform actions, we are able to quickly predict the action's outcomes. These predictions are not solely based on the observed actions themselves but utilize our prior knowledge of others. It has been suggested that observed outcomes that are not in line with these predictions result in prediction errors, which require additional processing to be integrated or updated. However, there is no consensus on whether this is indeed the case for the kind of high-level social–cognitive processes involved in action observation. In this fMRI study, we investigated whether observation of unexpected outcomes causes additional activation in line with the processing of prediction errors and, if so, whether this activation overlaps with activation in brain areas typically associated with social–cognitive processes. In the first part of the experiment, participants watched animated movies of two people playing a bowling game, one experienced and one novice player. In cases where the player's score was higher or lower than expected based on their skill level, there was increased BOLD activity in areas that were also activated during a theory of mind task that participants performed in the second part of the experiment. These findings are discussed in the light of different theoretical accounts of human social–cognitive processing.
8

Ianì, Francesco, Teresa Limata, Giuliana Mazzoni, and Monica Bucciarelli. "Observer’s body posture affects processing of other humans’ actions." Quarterly Journal of Experimental Psychology 74, no. 9 (March 29, 2021): 1595–604. http://dx.doi.org/10.1177/17470218211003518.

Abstract:
Action observation triggers by default a mental simulation of action unfolding in time. We assumed that this simulation is “embodied”: the body is the medium through which observer’s sensorimotor modalities simulate the observed action. The participants in two experiments observed videos, each depicting the central part of an action performed by an actress on an object (e.g., answering the phone) and soon after each video they observed a photo portraying a state of the action not observed in the video, either depicting the initial part or the final part of the whole action. Their task was to evaluate whether the photo portrayed something before (backward photo) or after the action in the video (forward photo). Results showed that evaluation of forward photos was faster than evaluation of backward photos (Experiment 1). Crucially, participants’ body posture modulated this effect: keeping the hands crossed behind the back interfered with forward simulations (Experiment 2). These results speak about the role of the observer’s body posture in processing other people’s actions.
9

Kroczek, Leon O. H., Angelika Lingnau, Valentin Schwind, Christian Wolff, and Andreas Mühlberger. "Angry facial expressions bias towards aversive actions." PLOS ONE 16, no. 9 (September 1, 2021): e0256912. http://dx.doi.org/10.1371/journal.pone.0256912.

Abstract:
Social interaction requires fast and efficient processing of another person’s intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants’ recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants’ action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
10

Gerson, Sarah A., Harold Bekkering, and Sabine Hunnius. "Short-term Motor Training, but Not Observational Training, Alters Neurocognitive Mechanisms of Action Processing in Infancy." Journal of Cognitive Neuroscience 27, no. 6 (June 2015): 1207–14. http://dx.doi.org/10.1162/jocn_a_00774.

Abstract:
The role of motor experience in the processing of perceived actions is hotly debated on both behavioral (e.g., action understanding) and neural (e.g., activation of the motor system) levels of interpretation. Whereas some researchers focus on the role of motor experience in the understanding of and motor activity associated with perceived actions, others emphasize the role of visual experience with the perceived actions. The question of whether prior firsthand motor experience is critical to motor system activation during perception of actions performed by others is best addressed through studies with infants who have a limited repertoire of motor actions. In this way, infants can receive motor or visual training with novel actions that are not mere recombinations of previously acquired actions. In this study, 10-month-old infants received active training with a motorically unfamiliar action that resulted in a distinct sound effect. They received observational experience with a second, similarly unfamiliar action. Following training, we assessed infants' neural motor activity via EEG while they listened to the sounds associated with the actions relative to a novel sound. We found a greater decrease in mu power to sounds associated with the motorically learned action than to those associated with the observed action that the infants had never produced. This effect was directly related to individual differences in the degree of motor learning via motor training. These findings indicate a unique effect of active experience on neural correlates of action perception.
11

Badgaiyan, Rajendra D. "Executive control, willed actions, and nonconscious processing." Human Brain Mapping 9, no. 1 (2000): 38–41. http://dx.doi.org/10.1002/(sici)1097-0193(2000)9:1<38::aid-hbm4>3.0.co;2-t.

12

Papeo, Liuba, Cinzia Cecchetto, Giulia Mazzon, Giulia Granello, Tatiana Cattaruzza, Lorenzo Verriello, Roberto Eleopra, and Raffaella I. Rumiati. "The processing of actions and action-words in amyotrophic lateral sclerosis patients." Cortex 64 (March 2015): 136–47. http://dx.doi.org/10.1016/j.cortex.2014.10.007.

13

Green, Patrick R., and Frank E. Pollick. "Recognising actions." Behavioral and Brain Sciences 25, no. 1 (February 2002): 106–7. http://dx.doi.org/10.1017/s0140525x02330024.

Abstract:
The ability to recognise the actions of conspecifics from displays of biological motion is an essential perceptual capacity. Physiological and psychological evidence suggests that the visual processing of biological motion involves close interaction between the dorsal and ventral systems. Norman's strong emphasis on the functional differences between these systems may impede understanding of their interactions.
14

Press, Clare, Elena Gherri, Cecilia Heyes, and Martin Eimer. "Action Preparation Helps and Hinders Perception of Action." Journal of Cognitive Neuroscience 22, no. 10 (October 2010): 2198–211. http://dx.doi.org/10.1162/jocn.2009.21409.

Abstract:
Several theories of the mechanisms linking perception and action require that the links are bidirectional, but there is a lack of consensus on the effects that action has on perception. We investigated this by measuring visual event-related brain potentials to observed hand actions while participants prepared responses that were spatially compatible (e.g., both were on the left side of the body) or incompatible and action type compatible (e.g., both were finger taps) or incompatible, with observed actions. An early enhanced processing of spatially compatible stimuli was observed, which is likely due to spatial attention. This was followed by an attenuation of processing for both spatially and action type compatible stimuli, likely to be driven by efference copy signals that attenuate processing of predicted sensory consequences of actions. Attenuation was not response-modality specific; it was found for manual stimuli when participants prepared manual and vocal responses, in line with the hypothesis that action control is hierarchically organized. These results indicate that spatial attention and forward model prediction mechanisms have opposite, but temporally distinct, effects on perception. This hypothesis can explain the inconsistency of recent findings on action–perception links and thereby supports the view that sensorimotor links are bidirectional. Such effects of action on perception are likely to be crucial, not only for the control of our own actions but also in sociocultural interaction, allowing us to predict the reactions of others to our own actions.
15

Barraclough, Nick E., Rebecca H. Keith, Dengke Xiao, Mike W. Oram, and David I. Perrett. "Visual Adaptation to Goal-directed Hand Actions." Journal of Cognitive Neuroscience 21, no. 9 (September 2009): 1805–19. http://dx.doi.org/10.1162/jocn.2008.21145.

Abstract:
Prolonged exposure to visual stimuli, or adaptation, often results in an adaptation “aftereffect” which can profoundly distort our perception of subsequent visual stimuli. This technique has been commonly used to investigate mechanisms underlying our perception of simple visual stimuli, and more recently, of static faces. We tested whether humans would adapt to movies of hands grasping and placing different weight objects. After adapting to hands grasping light or heavy objects, subsequently perceived objects appeared relatively heavier, or lighter, respectively. The aftereffects increased logarithmically with adaptation action repetition and decayed logarithmically with time. Adaptation aftereffects also indicated that perception of actions relies predominantly on view-dependent mechanisms. Adapting to one action significantly influenced the perception of the opposite action. These aftereffects can only be explained by adaptation of mechanisms that take into account the presence/absence of the object in the hand. We tested if evidence on action processing mechanisms obtained using visual adaptation techniques confirms underlying neural processing. We recorded monkey superior temporal sulcus (STS) single-cell responses to hand actions. Cells sensitive to grasping or placing typically responded well to the opposite action; cells also responded during different phases of the actions. Cell responses were sensitive to the view of the action and were dependent upon the presence of the object in the scene. We show here that action processing mechanisms established using visual adaptation parallel the neural mechanisms revealed during recording from monkey STS. Visual adaptation techniques can thus be usefully employed to investigate brain mechanisms underlying action perception.
16

Hayashi, Teruaki, and Yukio Ohsawa. "Processing Combinatorial Thinking." International Journal of Knowledge and Systems Science 4, no. 3 (July 2013): 14–38. http://dx.doi.org/10.4018/ijkss.2013070102.

Abstract:
The Innovators Market Game is a method for facilitating innovation by helping to create new ideas through the combination of existing ideas. In this game, participants play roles, think of new ideas, and evaluate them. The roles are selected from the real world, e.g., police officers, transportation authorities, and government. The Role-based Innovators Market Game proposed in this study is designed to lead to innovative ideas based on defined factors. Its rules, acting roles, and the communication within the Role-based IMG make players more creative and imaginative than sheer freedom would. This study proposes not only a way of creating new ideas but also a process for making them practical, by including an Action Planning step in which players further cultivate ideas into practical scenarios of action. Together, these two methods form the refined process of the Innovators Marketplace and help in contriving innovative ideas for society by discovering and solving practical problems.
17

Pickering, Martin J., and Simon Garrod. "An integrated theory of language production and comprehension." Behavioral and Brain Sciences 36, no. 4 (June 24, 2013): 329–47. http://dx.doi.org/10.1017/s0140525x12001495.

Abstract:
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
18

Hasue, Fumio, Tomoyuki Kuwaki, Hiroaki Yamada, Yasuichiro Fukuda, and Megumi Shimoyama. "Inhibitory Actions of Endothelin-1 on Pain Processing." Journal of Cardiovascular Pharmacology 44, Supplement 1 (November 2004): S318—S320. http://dx.doi.org/10.1097/01.fjc.0000166271.40044.0c.

19

Auclair-Ouellet, Noémie, Marion Fossard, Joël Macoir, and Robert Laforce. "The Nonverbal Processing of Actions Is an Area of Relative Strength in the Semantic Variant of Primary Progressive Aphasia." Journal of Speech, Language, and Hearing Research 63, no. 2 (February 26, 2020): 569–84. http://dx.doi.org/10.1044/2019_jslhr-19-00271.

Abstract:
Purpose: Better performance for actions compared to objects has been reported in the semantic variant of primary progressive aphasia (svPPA). This study investigated the influence of the assessment task (naming, semantic picture matching) over the dissociation between objects and actions. Method: Ten individuals with svPPA and 17 matched controls completed object and action naming tests, and object and action semantic picture matching tests. Performance was compared between the svPPA and control groups, within the svPPA group, and for each participant with svPPA versus the control group individually. Results: Compared to controls, participants with svPPA were impaired on object and action naming, and object and action semantic picture matching. As a group, participants with svPPA had an advantage for actions over objects and for semantic picture matching tests over naming tests. Eight participants had a better performance for actions compared to objects in naming, with three showing a significant difference. Nine participants had a better performance for actions compared to objects in semantic picture matching, with six showing a significant difference. For objects, semantic picture matching was better than naming in nine participants, with five showing a significant difference. For actions, semantic picture matching was better than naming in all 10 participants, with nine showing a significant difference. Conclusion: The nonverbal processing of actions, as assessed with a semantic picture matching test, is an area of relative strength in svPPA. Clinical implications for assessment planning and interpretation and theoretical implications for current models of semantic cognition are discussed.
20

Mele, Sonia, Alan D. A. Mattiassi, and Cosimo Urgesi. "Unconscious processing of body actions primes subsequent action perception but not motor execution." Journal of Experimental Psychology: Human Perception and Performance 40, no. 5 (October 2014): 1940–62. http://dx.doi.org/10.1037/a0036215.

21

Venter, Elmarie. "How and why actions are selected: action selection and the dark room problem." Kairos. Journal of Philosophy & Science 15, no. 1 (April 1, 2016): 19–45. http://dx.doi.org/10.1515/kjps-2016-0002.

Abstract:
In this paper, I examine an evolutionary approach to the action selection problem and illustrate how it helps raise an objection to the predictive processing account. Clark examines the predictive processing account as a theory of brain function that aims to unify perception, action, and cognition, but - despite this aim - fails to consider action selection overtly. He offers an account of action control with the implication that minimizing prediction error is an imperative of living organisms because, according to the predictive processing account, action is employed to fulfill expectations and reduce prediction error. One way in which this can be achieved is by seeking out the least stimulating environment and staying there (Friston et al. 2012: 2). The predictive processing account integrates Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). But most living organisms do not find, and stay in, surprise-free environments. This paper explores this objection, also called the “dark room problem”, and examines Clark’s response to the problem. Finally, I recommend that, if supplemented with an account of action selection, Clark’s account will avoid the dark room problem.
22

Di Costa, Steven, Héloïse Théro, Valérian Chambon, and Patrick Haggard. "Try and try again: Post-error boost of an implicit measure of agency." Quarterly Journal of Experimental Psychology 71, no. 7 (January 1, 2018): 1584–95. http://dx.doi.org/10.1080/17470218.2017.1350871.

Abstract:
The sense of agency refers to the feeling that we control our actions and, through them, effects in the outside world. Reinforcement learning provides an important theoretical framework for understanding why people choose to make particular actions. Few previous studies have considered how reinforcement and learning might influence the subjective experience of agency over actions and outcomes. In two experiments, participants chose between two action alternatives, which differed in reward probability. Occasional reversals of action–reward mapping required participants to monitor outcomes and adjust action selection processing accordingly. We measured shifts in the perceived times of actions and subsequent outcomes (‘intentional binding’) as an implicit proxy for sense of agency. In the first experiment, negative outcomes showed stronger binding towards the preceding action, compared to positive outcomes. Furthermore, negative outcomes were followed by increased binding of actions towards their outcome on the following trial. Experiment 2 replicated this post-error boost in action binding and showed that it only occurred when people could learn from their errors to improve action choices. We modelled the post-error boost using an established quantitative model of reinforcement learning. The post-error boost in action binding correlated positively with participants’ tendency to learn more from negative outcomes than from positive outcomes. Our results suggest a novel relation between sense of agency and reinforcement learning, in which sense of agency is increased when negative outcomes trigger adaptive changes in subsequent action selection processing.
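To make the modelling approach mentioned above concrete, here is a minimal, hypothetical Python sketch of a reinforcement-learning value update with separate learning rates for positive and negative prediction errors. It is an illustrative toy, not the authors' fitted model; all names and parameter values are assumptions.

import random

def update(value, reward, alpha_pos=0.1, alpha_neg=0.3):
    """Rescorla-Wagner style update with asymmetric learning rates:
    negative prediction errors are weighted more heavily than positive ones."""
    prediction_error = reward - value
    alpha = alpha_pos if prediction_error >= 0 else alpha_neg
    return value + alpha * prediction_error

# Two action alternatives with different reward probabilities, as in the experiments.
reward_probs = [0.8, 0.2]
values = [0.5, 0.5]
for trial in range(200):
    choice = max(range(2), key=lambda a: values[a])          # greedy action selection
    reward = 1.0 if random.random() < reward_probs[choice] else 0.0
    values[choice] = update(values[choice], reward)
print(values)   # the chosen action's value tracks its reward rate

With alpha_neg larger than alpha_pos, value estimates shift more after negative outcomes; that asymmetry is the kind of "learning more from negative outcomes" the reported correlation refers to.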
23

Treille, Avril, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, and Marc Sato. "Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions." Journal of Cognitive Neuroscience 29, no. 3 (March 2017): 448–66. http://dx.doi.org/10.1162/jocn_a_01057.

Abstract:
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to which extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because tongue movements of our interlocutor are accessible via their impact on speech acoustics but not visible because of its position inside the vocal tract, whereas lip movements are both “audible” and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visual visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with RTs for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.
24

Andres, Michael, Etienne Olivier, and Arnaud Badets. "Actions, Words, and Numbers." Current Directions in Psychological Science 17, no. 5 (October 2008): 313–17. http://dx.doi.org/10.1111/j.1467-8721.2008.00597.x.

Abstract:
Recent findings in neuroscience challenge the view that the motor system is exclusively dedicated to the control of actions, and it has been suggested that it may contribute critically to conceptual processes such as those involved in language and number representation. The aim of this review is to address this issue by illustrating some interactions between the motor system and the processing of words and numbers. First, we detail functional brain imaging studies suggesting that motor circuits may be recruited to represent the meaning of action-related words. Second, we summarize a series of experiments demonstrating some interference between the size of grip used to grasp objects and the magnitude processing of words or numbers. Third, we report data suggestive of a common representation of numbers and finger movements in the adult brain, a possible trace of the finger-counting strategies used in childhood. Altogether, these studies indicate that the motor system interacts with several aspects of word and number representations. Future research should determine whether these findings reflect a causal role of the motor system in the organization of semantic knowledge.
25

Kralev, Velin, Radoslava Kraleva, and Petia Koprinkova-Hristova. "Data modelling and data processing generated by human eye movements." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4345. http://dx.doi.org/10.11591/ijece.v11i5.pp4345-4352.

Abstract:
Data modeling and data processing are important activities in any scientific research. This research focuses on the modeling of data and processing of data generated by a saccadometer. The approach used is based on the relational data model, but the processing and storage of the data is done with client datasets. The experiments were performed with 26 randomly selected files from a total of 264 experimental sessions. The data from each experimental session was stored in three different formats, respectively text, binary and extensible markup language (XML) based. The results showed that the text format and the binary format were the most compact. Several actions related to data processing were analyzed. Based on the results obtained, it was found that the two fastest actions are respectively loading data from a binary file and storing data into a binary file. In contrast, the two slowest actions were storing the data in XML format and loading the data from a text file, respectively. Also, one of the time-consuming operations turned out to be the conversion of data from text format to binary format. Moreover, the time required to perform this action does not depend in proportion on the number of records processed.
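The format comparison summarised above can be illustrated with a small, hypothetical benchmark. The Python sketch below is not the authors' client-dataset code; it substitutes JSON for the text format, pickle for the binary format, and made-up saccade records for the real session data.

import json
import pickle
import time
from pathlib import Path

# Hypothetical stand-in for one experimental session: a list of saccade records.
records = [{"trial": i, "latency_ms": 180 + i % 40, "amplitude_deg": 10.0}
           for i in range(100_000)]

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f} s")

timed("store text (JSON)",     lambda: Path("session.json").write_text(json.dumps(records)))
timed("store binary (pickle)", lambda: Path("session.bin").write_bytes(pickle.dumps(records)))
timed("load text (JSON)",      lambda: json.loads(Path("session.json").read_text()))
timed("load binary (pickle)",  lambda: pickle.loads(Path("session.bin").read_bytes()))

The relative timings of the four operations give a rough analogue of the comparison described in the abstract.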
26

Chiavarino, Claudia, Ian A. Apperly, and Glyn W. Humphreys. "Frontal and parietal lobe involvement in the processing of pretence and intention." Quarterly Journal of Experimental Psychology 62, no. 9 (September 2009): 1738–56. http://dx.doi.org/10.1080/17470210802633313.

Abstract:
We assessed whether different processes might be at play during pretence understanding by examining breakdowns of performance in participants with acquired brain damage. In Experiment 1 patients with frontal or parietal lesions and neurologically intact adults were asked to categorize videos of pretend and real actions. In Experiment 2 participants saw three types of videos: real intentional actions, real accidental actions, and pretend actions. In one session they judged whether the actions they saw were intentional or accidental, and in a second session they judged whether the actions were real or pretend. Parietal patients had particular difficulties in the identification of pretend actions, and both parietal and frontal patients were more impaired than controls in understanding the intentional nature of pretence. Analyses of individual patients’ performance revealed that parietal lesions, and in particular lesions to the temporo-parietal junction, impaired the ability to discriminate pretend from real actions. However, this did not necessarily affect the discrimination of intentional from unintentional actions, which instead may be independently disrupted by damage to frontal areas. Moreover, spared ability to discriminate pretend actions from real actions, and intentional actions from accidental actions, did not grant a full conceptual understanding of the intentional nature of pretence. The implications for pretence understanding are discussed.
27

Ichikawa, Tetsuo, Junji Komoda, Masanobu Horiuchi, Hiroyasu Ichiba, Masaru Hada, and Naoyuki Matsumoto. "Observation of oral actions using digital image processing system." Nihon Hotetsu Shika Gakkai Zasshi 34, no. 2 (1990): 396–401. http://dx.doi.org/10.2186/jjps.34.396.

28

Martin, Drew, and Arch G. Woodside. "Tourists' dual‐processing accounts of reasoning, judgment, and actions." International Journal of Culture, Tourism and Hospitality Research 5, no. 2 (June 7, 2011): 195–212. http://dx.doi.org/10.1108/17506181111139609.

29

Dickinson, CJ, C. Seva, and T. Yamada. "Gastrin Processing: From Biochemical Obscurity to Unique Physiological Actions." Physiology 12, no. 1 (February 1, 1997): 9–15. http://dx.doi.org/10.1152/physiologyonline.1997.12.1.9.

Abstract:
Posttranslational processing is essential for the biological activation of many peptide hormones. Only fully processed and amidated gastrin, a peptide secreted by the stomach, stimulates acid secretion. However, both amidated gastrin and its glycine-extended precursor stimulate cellular proliferation through selective receptors, suggesting that posttranslational processing is critical to gastrointestinal physiology.
30

Knopf, Monika, Uta Kraus, and Regina A. Kressley-Mba. "Relational information processing of novel unrelated actions by infants." Infant Behavior and Development 29, no. 1 (January 2006): 44–53. http://dx.doi.org/10.1016/j.infbeh.2005.07.005.

31

Jammernegg, Werner, and Peter Kischka. "Information processing in a three-actions dynamic decision model." European Journal of Operational Research 62, no. 3 (November 1992): 282–93. http://dx.doi.org/10.1016/0377-2217(92)90118-s.

32

Ganier, Franck. "Processing text and pictures in procedural instructions." Theme: Pictograms 10, no. 2 (December 31, 2001): 146–53. http://dx.doi.org/10.1075/idj.10.2.12gan.

Abstract:
Background. Following procedural instructions normally requires the learner to interpret written information before carrying out any action. This interpretation entails transforming pictorial and/or linguistic information into a series of actions. Current psychological models propose that these two kinds of information are not processed in the same way, and that pictures lead more directly to the construction of a mental representation than does text. If this is so, then giving pictorial instructions to carry out an action seems more appropriate than giving text. However, processing instructions sometimes fails, even with picture formats. One approach to studying why this kind of communication fails is to investigate how textual and pictorial information is processed.
33

Bakker, Marta, Jessica A. Sommerville, and Gustaf Gredebäck. "Enhanced Neural Processing of Goal-directed Actions After Active Training in 4-Month-Old Infants." Journal of Cognitive Neuroscience 28, no. 3 (March 2016): 472–82. http://dx.doi.org/10.1162/jocn_a_00909.

Abstract:
The current study explores the neural correlates of action perception and its relation to infants' active experience performing goal-directed actions. Study 1 provided active training with sticky mittens that enables grasping and object manipulation in prereaching 4-month-olds. After training, EEG was recorded while infants observed images of hands grasping toward (congruent) or away from (incongruent) objects. We demonstrate that brief active training facilitates social perception as indexed by larger amplitude of the P400 ERP component to congruent compared with incongruent trials. Study 2 presented 4-month-old infants with passive training in which they observed an experimenter perform goal-directed reaching actions, followed by an identical ERP session to that used in Study 1. The second study did not demonstrate any differentiation between congruent and incongruent trials. These results suggest that (1) active experience alters the brains' response to goal-directed actions performed by others and (2) visual exposure alone is not sufficient in developing the neural networks subserving goal processing during action observation in infancy.
34

van Rooij, Iris, Willem Haselager, and Harold Bekkering. "Goals are not implied by actions, but inferred from actions and contexts." Behavioral and Brain Sciences 31, no. 1 (February 2008): 38–39. http://dx.doi.org/10.1017/s0140525x07003305.

Abstract:
People cannot understand intentions behind observed actions by direct simulation, because goal inference is highly context dependent. Context dependency is a major source of computational intractability in traditional information-processing models. An embodied embedded view of cognition may be able to overcome this problem, but then the problem needs recognition and explication within the context of the new, layered cognitive architecture.
35

Zhang, Huixia, Guowei Shen, Chun Guo, Yunhe Cui, and Chaohui Jiang. "EX-Action: Automatically Extracting Threat Actions from Cyber Threat Intelligence Report Based on Multimodal Learning." Security and Communication Networks 2021 (May 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/5586335.

Abstract:
With the increasing complexity of network attacks, an active defense based on intelligence sharing becomes crucial. An important issue in intelligence analysis is the automatic extraction of threat actions from cyber threat intelligence (CTI) reports. To address this problem, we propose EX-Action, a framework for extracting threat actions from CTI reports. EX-Action finds threat actions by employing natural language processing (NLP) technology and identifies actions by a multimodal learning algorithm. At the same time, a metric is used to evaluate the information completeness of the extracted actions obtained by EX-Action. In experiments on CTI reports consisting of sentences with complex structure, the results indicate that EX-Action achieves better performance than two state-of-the-art action extraction methods in terms of accuracy, recall, precision, and F1-score.
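As a rough illustration of the NLP step only (not EX-Action itself and not its multimodal learning component), candidate threat actions can be pulled out of report sentences as verb-object pairs. The Python sketch below is hypothetical and assumes spaCy with its small English model is installed.

import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

sentence = "The malware downloads a payload and modifies several registry keys."
doc = nlp(sentence)

# Candidate threat actions as (verb lemma, direct object) pairs.
actions = [(token.lemma_, child.text)
           for token in doc if token.pos_ == "VERB"
           for child in token.children if child.dep_ == "dobj"]
print(actions)   # e.g., [('download', 'payload'), ('modify', 'keys')]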
36

Spencer, Rachel Ann, Simon Edward Frank Spencer, Sarah Rodgers, Stephen M. Campbell, and Anthony John Avery. "Processing of discharge summaries in general practice: a retrospective record review." British Journal of General Practice 68, no. 673 (June 18, 2018): e576-e585. http://dx.doi.org/10.3399/bjgp18x697877.

Abstract:
Background: There is a need for greater understanding of the epidemiology of primary care patient safety in order to generate solutions to prevent future harm. Aim: To estimate the rate of failures in processing actions requested in hospital discharge summaries, and to determine factors associated with these failures. Design and setting: The authors undertook a retrospective records review. The study population was emergency admissions for patients aged ≥75 years, drawn from 10 practices in three areas of England. Method: One GP researcher reviewed the records for 300 patients after hospital discharge to determine the rate of compliance with actions requested in the discharge summary, and to estimate the rate of associated harm from non-compliance. In cases where GPs documented decision-making contrary to what was requested, these instances did not constitute failures. Data were also collected on time taken to process discharge communications. Results: There were failures in processing actions requested in 46% (112/246) of discharge summaries (95% confidence interval [CI] = 39 to 52%). Medication changes were not made in 17% (124/750) of requests (95% CI = 14 to 19%). Tests were not completed for 26% of requests (95% CI = 16 to 35%), and 27% of requested follow-ups were not arranged (95% CI = 20 to 33%). The harm rate associated with these failures was 8%. Increased risk of failure to process test requests was significantly associated with the type of clinical IT system, and male patients. Conclusion: Failures occurred in the processing of requested actions in almost half of all discharge summaries, and with all types of action requested. Associated harms were uncommon and most were of moderate severity.
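As a quick arithmetic check, the reported 95% confidence intervals are consistent with a simple normal-approximation (Wald) interval for a proportion. The Python sketch below is illustrative and not taken from the paper, which may have used a different interval method.

from math import sqrt

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

print(wald_ci(112, 246))  # ~ (0.455, 0.393, 0.518): 46%, CI 39 to 52%
print(wald_ci(124, 750))  # ~ (0.165, 0.139, 0.192): 17%, CI 14 to 19%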
37

Amoruso, Lucia, Alessandra Finisguerra, and Cosimo Urgesi. "Spatial frequency tuning of motor responses reveals differential contribution of dorsal and ventral systems to action comprehension." Proceedings of the National Academy of Sciences 117, no. 23 (May 26, 2020): 13151–61. http://dx.doi.org/10.1073/pnas.1921512117.

Abstract:
Understanding object-directed actions performed by others is central to everyday life. This ability is thought to rely on the interaction between the dorsal action observation network (AON) and a ventral object recognition pathway. On this view, the AON would encode action kinematics, and the ventral pathway, the most likely intention afforded by the objects. However, experimental evidence supporting this model is still scarce. Here, we aimed to disentangle the contribution of dorsal vs. ventral pathways to action comprehension by exploiting their differential tuning to low-spatial frequencies (LSFs) and high-spatial frequencies (HSFs). We filtered naturalistic action images to contain only LSF or HSF and measured behavioral performance and corticospinal excitability (CSE) using transcranial magnetic stimulation (TMS). Actions were embedded in congruent or incongruent scenarios as defined by the compatibility between grips and intentions afforded by the contextual objects. Behaviorally, participants were better at discriminating congruent actions in intact than LSF images. This effect was reversed for incongruent actions, with better performance for LSF than intact and HSF. These modulations were mirrored at the neurophysiological level, with greater CSE facilitation for congruent than incongruent actions for HSF and the opposite pattern for LSF images. Finally, only for LSF did we observe CSE modulations according to grip kinematics. While results point to differential dorsal (LSF) and ventral (HSF) contributions to action comprehension for grip and context encoding, respectively, the negative congruency effect for LSF images suggests that object processing may influence action perception not only through ventral-to-dorsal connections, but also through a dorsal-to-dorsal route involved in predictive processing.
38

Topoleanu, Tudor, Gheorghe Leonte Mogan, and Cristian Postelnicu. "On Semantic Graph Language Processing for Mobile Robot Voice Interaction." Applied Mechanics and Materials 162 (March 2012): 286–93. http://dx.doi.org/10.4028/www.scientific.net/amm.162.286.

Abstract:
This paper describes a simple semantic-graph-based model for processing natural language commands issued to a mobile robot. The proposed model is intended to translate natural language commands given by naïve users into an action or sequence of actions that the robot can execute via its available functionality, in order to complete the commands. This approach to language processing is easily extensible through automated learning; it is also simpler and more scalable than hard-coded command-to-action mapping, while remaining flexible and covering any number of command formulations that a user could generate.
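A minimal, hypothetical Python sketch of the general idea, mapping command words through a tiny semantic graph onto an executable action sequence; this illustrates the concept rather than the paper's implementation, and all node names and action primitives are invented.

from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    """A node in a toy semantic graph: a command concept linked to robot action primitives."""
    name: str
    actions: list = field(default_factory=list)
    synonyms: set = field(default_factory=set)

GRAPH = {
    "fetch": ConceptNode("fetch",
                         actions=["locate(object)", "navigate_to(object)",
                                  "grasp(object)", "return_to(user)"],
                         synonyms={"fetch", "bring", "get"}),
    "go": ConceptNode("go",
                      actions=["navigate_to(place)"],
                      synonyms={"go", "move", "drive"}),
}

def interpret(command: str) -> list:
    """Translate a natural-language command into a sequence of executable actions."""
    words = set(command.lower().split())
    for node in GRAPH.values():
        if words & node.synonyms:
            return node.actions
    return []   # unknown formulation: nothing executable found

print(interpret("Please bring me the red cup"))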
39

Li, Ling Li, Zhao Gang, Han Fen Gu, and Yang Zhao. "Research on Modeling and Realization of Processing Action for Cloud Manufacturing Mode." Key Engineering Materials 486 (July 2011): 111–14. http://dx.doi.org/10.4028/www.scientific.net/kem.486.111.

Abstract:
The modelling and realization of processing actions are investigated under the cloud manufacturing mode, which can facilitate manufacturing services for process engineers. Processing actions are realized according to the minimum-machining-cost and shortest-process-time principles. The architecture for reasoning about process behaviors is constructed, and a subsystem supporting processing actions is then developed. The auxiliary subsystem is available as an accessory to the machine tool.
40

Joshila Grace, L. K., K. Rahul, and P. S. Sidharth. "An Efficient Action Detection Model Using Deep Belief Networks." Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3232–36. http://dx.doi.org/10.1166/jctn.2019.8168.

Abstract:
Computer vision and image processing have advanced enormously through machine learning techniques. Two major research areas within machine learning are action detection and pattern recognition. Action recognition is a recent advancement of pattern recognition approaches in which the actions performed by a living being are tracked and monitored. Action recognition still faces challenges, notably recognizing actions in minimal time. Models such as SVMs and neural networks are trained so that they can detect an action pattern when a new frame is given. In this paper, we propose a model that detects patterns of actions from a video or an image. Bounding boxes are used to detect and localize the actions. A Deep Belief Network is used to train the model, with numerous images containing actions given as the training set. Performance evaluation shows that the model detects actions accurately when a new image is given to the network.
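A hedged sketch of the general approach the abstract describes, i.e. stacked restricted Boltzmann machines feeding a classifier. It is not the paper's model: it uses scikit-learn's BernoulliRBM on the toy digits dataset in place of action frames and bounding boxes.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0   # scale pixel values to [0, 1] for the Bernoulli RBMs

# DBN-style pipeline: two stacked RBMs learn features, a classifier labels the pattern.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))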
41

Barraclough*, Nick E., Dengke Xiao*, Chris I. Baker, Mike W. Oram, and David I. Perrett. "Integration of Visual and Auditory Information by Superior Temporal Sulcus Neurons Responsive to the Sight of Actions." Journal of Cognitive Neuroscience 17, no. 3 (March 2005): 377–91. http://dx.doi.org/10.1162/0898929053279586.

Abstract:
Processing of complex visual stimuli comprising facial movements, hand actions, and body movements is known to occur in the superior temporal sulcus (STS) of humans and nonhuman primates. The STS is also thought to play a role in the integration of multimodal sensory input. We investigated whether STS neurons coding the sight of actions also integrated the sound of those actions. For 23% of neurons responsive to the sight of an action, the sound of that action significantly modulated the visual response. The sound of the action increased or decreased the visually evoked response for an equal number of neurons. In the neurons whose visual response was increased by the addition of sound (but not those neurons whose responses were decreased), the audiovisual integration was dependent upon the sound of the action matching the sight of the action. These results suggest that neurons in the STS form multisensory representations of observed actions.
42

Tettamanti, Marco, Giovanni Buccino, Maria Cristina Saccuman, Vittorio Gallese, Massimo Danna, Paola Scifo, Ferruccio Fazio, Giacomo Rizzolatti, Stefano F. Cappa, and Daniela Perani. "Listening to Action-related Sentences Activates Fronto-parietal Motor Circuits." Journal of Cognitive Neuroscience 17, no. 2 (February 2005): 273–81. http://dx.doi.org/10.1162/0898929053124965.

Abstract:
Observing actions made by others activates the cortical circuits responsible for the planning and execution of those same actions. This observation–execution matching system (mirror-neuron system) is thought to play an important role in the understanding of actions made by others. In an fMRI experiment, we tested whether this system also becomes active during the processing of action-related sentences. Participants listened to sentences describing actions performed with the mouth, the hand, or the leg. Abstract sentences of comparable syntactic structure were used as control stimuli. The results showed that listening to action-related sentences activates a left fronto-parieto-temporal network that includes the pars opercularis of the inferior frontal gyrus (Broca's area), those sectors of the premotor cortex where the actions described are motorically coded, as well as the inferior parietal lobule, the intraparietal sulcus, and the posterior middle temporal gyrus. These data provide the first direct evidence that listening to sentences that describe actions engages the visuomotor circuits which subserve action execution and observation.
43

Hooijdonk, Charlotte van, Fons Maes, and Nicole Ummelen. "'I have been here before'." Text features which enable cognitive strategies during text comprehension 14, no. 1 (April 27, 2006): 8–21. http://dx.doi.org/10.1075/idj.14.1.03hoo.

Abstract:
We conducted an explorative study to investigate whether hypertext users use spatial expressions to conceptualize cognitive actions they are involved in, and how these expressions relate to the type of actions (executions versus evaluations) and the level of actions (syntactic vs. semantic vs. pragmatic). As a method, we used ten thinking aloud protocols of hypertext users who were navigating a website. The results of the protocol analysis indicate that spatial expressions were most frequent when users describe executions on the syntactic action level. The exploration allows us to critically assess the value of the thinking aloud method to shed light on the cognitive actions and processing involved in using hypertext.
44

Assmus, Ann, Carsten Giessing, Peter H. Weiss, and Gereon R. Fink. "Functional Interactions during the Retrieval of Conceptual Action Knowledge: An fMRI Study." Journal of Cognitive Neuroscience 19, no. 6 (June 2007): 1004–12. http://dx.doi.org/10.1162/jocn.2007.19.6.1004.

Abstract:
Impaired retrieval of conceptual knowledge for actions has been associated with lesions of left premotor, left parietal, and left middle temporal areas [Tranel, D., Kemmerer, D., Adolphs, R., Damasio, H., & Damasio, A. R. Neural correlates of conceptual knowledge for actions. Cognitive Neuropsychology, 409–432, 2003]. Here we aimed at characterizing the differential contribution of these areas to the retrieval of conceptual knowledge about actions. During functional magnetic resonance imaging (fMRI), different categories of pictograms (whole-body actions, manipulable and nonmanipulable objects) were presented to healthy subjects. fMRI data were analyzed using SPM2. A conjunction analysis of the neural activations elicited by all pictograms revealed ( p < .05, corrected) a bilateral inferior occipito-temporal neural network with strong activations in the right and left fusiform gyri. Action pictograms contrasted to object pictograms showed differential activation of area MT+, the inferior and superior parietal cortex, and the premotor cortex bilaterally. An analysis of psychophysiological interactions identified contribution-dependent changes in the neural responses when pictograms triggered the retrieval of conceptual action knowledge: Processing of action pictograms specifically enhanced the neural interaction between the right and left fusiform gyri, the right and left middle temporal cortices (MT+), and the left superior and inferior parietal cortex. These results complement and extend previous neuropsychological and neuroimaging studies by showing that knowledge about action concepts results from an increased coupling between areas concerned with semantic processing (fusiform gyrus), movement perception (MT+), and temporospatial movement control (left parietal cortex).
45

Meister, Ingo G., and Marco Iacoboni. "No Language-Specific Activation during Linguistic Processing of Observed Actions." PLoS ONE 2, no. 9 (September 12, 2007): e891. http://dx.doi.org/10.1371/journal.pone.0000891.

46

Vicary, Staci A., and Catherine J. Stevens. "Posture-based processing in visual short-term memory for actions." Quarterly Journal of Experimental Psychology 67, no. 12 (December 2014): 2409–24. http://dx.doi.org/10.1080/17470218.2014.931445.

47

Abdollahi, Rouhollah O., Jan Jastorff, and Guy A. Orban. "Common and Segregated Processing of Observed Actions in Human SPL." Cerebral Cortex 23, no. 11 (August 23, 2012): 2734–53. http://dx.doi.org/10.1093/cercor/bhs264.

48

Janczyk, Markus, Volker H. Franz, and Wilfried Kunde. "Grasping for parsimony: Do some motor actions escape dorsal processing?" Neuropsychologia 48, no. 12 (October 2010): 3405–15. http://dx.doi.org/10.1016/j.neuropsychologia.2010.06.034.

49

Xu, Shi. "Internalization, Trafficking, Intracellular Processing and Actions of Antibody-Drug Conjugates." Pharmaceutical Research 32, no. 11 (June 25, 2015): 3577–83. http://dx.doi.org/10.1007/s11095-015-1729-8.

50

Gerdes, Karen E., and Elizabeth A. Segal. "A Social Work Model of Empathy." Advances in Social Work 10, no. 2 (December 15, 2009): 114–27. http://dx.doi.org/10.18060/235.

Abstract:
This article presents a social work model of empathy that reflects the latest interdisciplinary research findings on empathy. The model reflects the social work commitment to social justice. The three model components are: 1) the affective response to another’s emotions and actions; 2) the cognitive processing of one’s affective response and the other person’s perspective; and 3) the conscious decision-making to take empathic action. Mirrored affective responses are involuntary, while cognitive processing and conscious decision-making are voluntary. The affective component requires healthy, neural pathways to function appropriately and accurately. The cognitive aspects of perspective-taking, self-awareness, and emotion regulation can be practiced and cultivated, particularly through the use of mindfulness techniques. Empathic action requires that we move beyond affective responses and cognitive processing toward utilizing social work values and knowledge to inform our actions. By introducing the proposed model of empathy, we hope it will serve as a catalyst for discussion and future research and development of the model. Key Words: Empathy, Social Empathy, Social Cognitive Neuroscience