Selected scientific literature on the topic "Verbal disfluency"


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Verbal disfluency".


Journal articles on the topic "Verbal disfluency":

1

Engelhardt, Paul E., Mhairi E. G. McMullon, and Martin Corley. "Individual differences in the production of disfluency: A latent variable analysis of memory ability and verbal intelligence". Quarterly Journal of Experimental Psychology 72, no. 5 (June 5, 2018): 1084–101. http://dx.doi.org/10.1177/1747021818778752.

Abstract:
Recent work has begun to focus on the role that individual differences in executive function and intelligence have on the production of fluent speech. However, isolating the underlying causes of different types of disfluency has been difficult given the speed and complexity of language production. In this study, we focused on the role of memory abilities and verbal intelligence, and we chose a task that relied heavily on memory for successful performance. Given the task demands, we hypothesised that a substantial proportion of disfluencies would be due to memory retrieval problems. We contrasted memory abilities with individual differences in verbal intelligence as previous work highlighted verbal intelligence as an important factor in disfluency production. A total of 78 participants memorised and repeated 40 syntactically complex sentences, which were recorded and coded for disfluencies. Model comparisons were carried out using hierarchical structural equation modelling. Results showed that repetitions were significantly related to verbal intelligence. Unfilled pauses and repairs, in contrast, were marginally (p < .09) related to memory abilities. The relationship in all cases was negative. Conclusions explore the link between different types of disfluency and particular problems arising in the course of production, and how individual differences inform theoretical debates in language production.
2

Liles, Betty Z., Jay Lerman, Lisa Christensen, and Joy St. Ledger. "A Case Description of Verbal and Signed Disfluencies of a 10-Year-Old Boy Who Is Retarded". Language, Speech, and Hearing Services in Schools 23, no. 2 (April 1992): 107–12. http://dx.doi.org/10.1044/0161-1461.2302.107.

Abstract:
Disfluencies in the verbal and signed language of a 10-year-old moderately mentally retarded boy were analyzed from extensive video samples of spontaneous communication and structured language lessons. The subject had normal hearing with speech and language commensurate to his mental age. The subject was observed to be disfluent in verbal communication and in verbal and manual communication produced simultaneously. Repetitions, prolongations, and blockages were described as predominately synchronous across communicative modes during the use of total communication (i.e., simultaneous verbal and sign). Discussion addresses implications for the accurate characterization of stuttering in manual communication and the appropriate approaches to management.
3

Meyers, Susan C., and Frances J. Freeman. "Interruptions as a Variable in Stuttering and Disfluency". Journal of Speech, Language, and Hearing Research 28, no. 3 (September 1985): 428–35. http://dx.doi.org/10.1044/jshr.2803.435.

Abstract:
Parental verbal behavior is often cited as a major precipitating and maintaining factor in the onset and development of stuttering. Parents are frequently counseled to avoid interrupting their stuttering child. The purpose of the present study was to determine (a) whether mothers of preschool stutterers interrupt children's speech more frequently than mothers of nonstutterers, (b) whether stutterers interrupt the speech of mothers more frequently than nonstutterers do, and (c) whether there is a relationship between interruptive behavior and the occurrence of children's disfluencies. Twenty-four preschool boys (12 stutterers and 12 nonstutterers) and their mothers participated in the study. Ten-minute conversational speech samples of mothers interacting with their own children, unfamiliar stutterers, and unfamiliar nonstutterers were analyzed. Results indicated that mothers of nonstutterers interrupted the disfluent speech of stutterers significantly more often than did mothers of stutterers. Most importantly, all mothers interrupted children's disfluent speech significantly more than they interrupted children's fluent speech. Further, all children demonstrated a tendency to be disfluent when they interrupted a mother.
4

Pudlinski, Christopher, and Rachel S. Y. Chen. "Destigmatizing disfluency". Journal of Interactional Research in Communication Disorders 14, no. 2 (May 26, 2023): 220–40. http://dx.doi.org/10.1558/jircd.24376.

Abstract:
Background: Typically understood as a symptom of a speech disorder, stuttering is the verbal repetition of sounds, words, or phrases that suspend the progression of a speaker’s turn. Method: Using conversation analysis, over 180 phrasal multisyllabic stutters were found in audio recordings of peer telephone support in the United States. Results: Most phrasal stutters arise from early, within-turn indicators of potential sequential, semantic, or syntactic trouble. Typically produced with quick pacing, the stutters are varied, including the latching of sounds across words, abbreviated words, word blends, and/or unintelligible sounds. Elongated or cut-off sounds often indicate the seeming end of a stutter, with either abandonment or a typically fluent completion of a current turn occurring upon a stutter’s conclusion. Importantly, the other interactant never interrupts or completes the stutter. Discussion/conclusion: These findings contradict prior conversation analytic studies of stutters and describe stuttering as a normalized everyday action, where speakers can successfully navigate disfluency to reach eventual fluency.
5

Park, Yeong Hye, Kyungjae Lee, and Seong Hee Choi. "Effects of Nanta Activities on the Adults Who Stutter with Intellectual Disabilities". Audiology and Speech Research 17, no. 3 (July 31, 2021): 314–21. http://dx.doi.org/10.21848/asr.210009.

Abstract:
Adults who stutter (AWS) may have difficulty coordinating speech-related muscle movements; therefore, enhancing the coordination of speech-related muscles may result in a decrease in disfluency. The current study is a case report of two AWS with intellectual disabilities who received Nanta treatment, a non-verbal music therapy technique focusing on the coordination of muscles. Two AWS with intellectual disabilities received Nanta treatment for 15 sessions. The Nanta treatment is made up of two goals: body movement to rhythm and speech-related movement to rhythm. Disfluency frequencies for conversation samples were measured pre-treatment, at every second treatment, and post-treatment. In addition, a communication attitude test was conducted pre- and post-treatment. Both participants showed a decrease in disfluency frequencies, although there were individual differences in the pattern of change. However, the participants did not show a positive change in communication attitude. The Nanta treatment may have been effective in reducing disfluency frequencies for the participants of the current study, especially because the treatment was based on non-verbal techniques. These results emphasize that stuttering treatment should be individualized according to the client's characteristics.
6

Arongna, Naomi Sakai, Keiichi Yasu, and Koichi Mori. "Disfluencies and Strategies Used by People Who Stutter During a Working Memory Task". Journal of Speech, Language, and Hearing Research 63, no. 3 (March 23, 2020): 688–701. http://dx.doi.org/10.1044/2019_jslhr-19-00393.

Abstract:
Purpose: Working memory (WM) deficits are implicated in various communication disorders, including stuttering. The reading span test (RST) measures WM capacity with the dual task of reading sentences aloud and remembering target words. This study demonstrates a difference in strategy between people who stutter (PWS) and people who do not stutter (PWNS) in performing the RST. The impact of the effective strategy and of stuttering-like disfluencies during the RST was investigated. Method: Twenty-six PWS and 24 PWNS performed the RST and a simple reading-aloud task. After the RST, they were asked which strategy ("imagery" or "rehearsal") they had used to remember the target words during the task. Results: The proportion of those who used an "imagery" strategy during the RST was significantly smaller in the PWS group. However, the RST scores of those who used an "imagery" strategy were significantly higher than those of "rehearsal" users in both groups. The "rehearsal" users were asked to undertake one more RST with an "imagery" strategy, which resulted in an increased score for both groups. The disfluency frequency of the PWS group was significantly lower during the RST than during the oral reading task, irrespective of the employed strategy. Conclusions: PWS tended to use the less effective verbal "rehearsal" strategy during the RST. The differential effects of switching strategies on the measured WM capacity and on the disfluency rate suggest that the enhanced fluency during the RST is mostly attributable to reduced attention to speech motor control. Therefore, using the "imagery" strategy and focusing on the contents of communication, away from speech motor control, should help PWS communicate better in daily conversation.
7

Plexico, Laura W., Julie E. Cleary, Ashlynn McAlpine, and Allison M. Plumb. "Disfluency Characteristics Observed in Young Children With Autism Spectrum Disorders: A Preliminary Report". Perspectives on Fluency and Fluency Disorders 20, no. 2 (August 2010): 42–50. http://dx.doi.org/10.1044/ffd20.2.42.

Abstract:
This descriptive study evaluates the speech disfluencies of 8 verbal children between 3 and 5 years of age with autism spectrum disorders (ASD). Speech samples were collected for each child during standardized interactions. The percentage and types of disfluencies observed in the speech samples are discussed. Although they did not have a clinical diagnosis of stuttering, all of the young children with ASD in this study produced disfluencies. In addition to stuttering-like disfluencies and other typical disfluencies, the children with ASD also produced atypical disfluencies, which usually are not observed in children with typically developing speech or developmental stuttering (Yairi & Ambrose, 2005).
8

Braun, Angelika, Nathalie Elsässer, and Lea Willems. "Disfluencies Revisited—Are They Speaker-Specific?" Languages 8, no. 3 (June 26, 2023): 155. http://dx.doi.org/10.3390/languages8030155.

Abstract:
The forensic application of phonetics relies on individuality in speech. In the forensic domain, individual patterns of verbal and paraverbal behavior are of interest which are readily available, measurable, consistent, and robust to disguise and to telephone transmission. This contribution is written from the perspective of the forensic phonetic practitioner and seeks to establish a more comprehensive concept of disfluency than previous studies have. A taxonomy of possible variables forming part of what can be termed disfluency behavior is outlined. It includes the “classical” fillers, but extends well beyond these, covering, among others, additional types of fillers as well as prolongations, but also the way in which fillers are combined with pauses. In the empirical section, the materials collected for an earlier study are re-examined and subjected to two different statistical procedures in an attempt to approach the issue of individuality. Recordings consist of several minutes of spontaneous speech by eight speakers on three different occasions. Beyond the established set of hesitation markers, additional aspects of disfluency behavior which fulfill the criteria outlined above are included in the analysis. The proportion of various types of disfluency markers is determined. Both statistical approaches suggest that these speakers can be distinguished at a level far above chance using the disfluency data. At the same time, the results show that it is difficult to pin down a single measure which characterizes the disfluency behavior of an individual speaker. The forensic implications of these findings are discussed.
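The analysis the abstract describes (per-recording proportions of disfluency-marker types, then testing whether speakers can be told apart above chance) can be sketched roughly as follows. The marker categories, tallies, and the nearest-centroid check below are all invented for illustration; they are not the authors' actual data or statistical procedures:

```python
from collections import Counter

# Hypothetical disfluency-marker tallies for two recordings per speaker.
# Categories loosely follow the taxonomy sketched in the abstract.
RECORDINGS = {
    ("spk1", 1): ["uh", "um", "prolongation", "uh", "uh"],
    ("spk1", 2): ["uh", "uh", "um", "prolongation"],
    ("spk2", 1): ["um", "um", "um", "pause+filler"],
    ("spk2", 2): ["um", "pause+filler", "um"],
}
CATEGORIES = ["uh", "um", "prolongation", "pause+filler"]

def proportions(markers):
    """Proportion of each disfluency category in one recording."""
    counts = Counter(markers)
    total = sum(counts.values())
    return [counts[c] / total for c in CATEGORIES]

def centroid(vectors):
    """Mean profile across a speaker's recordings."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def nearest_speaker(vec, centroids):
    """Assign a recording to the speaker with the closest profile."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda s: dist(vec, centroids[s]))

# Leave-one-recording-out check of speaker discriminability.
correct = 0
for held_out, markers in RECORDINGS.items():
    train = {k: v for k, v in RECORDINGS.items() if k != held_out}
    profiles = {}
    for (spk, _), m in train.items():
        profiles.setdefault(spk, []).append(proportions(m))
    cents = {s: centroid(v) for s, v in profiles.items()}
    correct += nearest_speaker(proportions(markers), cents) == held_out[0]

print(f"{correct}/{len(RECORDINGS)} recordings matched to their speaker")
```

With these toy tallies every held-out recording lands on its own speaker, which mirrors the abstract's point that disfluency profiles can distinguish speakers above chance even though no single measure does so on its own.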
9

Leonteva, A. V., O. V. Agafonova, and A. A. Petrov. "DOES TIME MATTER? A MULTIMODAL ANALYSIS OF SI FROM L2 TO L1". Voprosy Kognitivnoy Lingvistiki, no. 3 (2023): 40–46. http://dx.doi.org/10.20916/1812-3228-2023-3-40-46.

Abstract:
Simultaneous interpreting is regarded as one of the most difficult and stressful types of activity. Simultaneous interpreters work under a severe time deficit and have to absorb a great deal of information per unit of time, which means that different cognitive processes (e.g., memory, attention, thinking, perception) are engaged concurrently. This leads to a severe cognitive load, sometimes compared with that of pilots. In the current study we investigate how the increase in cognitive load over time affects interpreters' performance. We expect this surge to be observed on two levels: verbal and nonverbal (gestural). The analysis is based on 10 videos of simultaneous interpreting of a lecture about biodiversity from English (L2) into Russian (L1), approximately 10 minutes each. The results of the study show an increase in speech disfluencies on the verbal level and a redistribution of gesture functions on the nonverbal level. In particular, verbal disfluencies are exteriorized in fillers, draggings, and truncations, and their number increases over the course of the interpreting. Along with disfluencies, we observed a rise in co-speech gestures, e.g., adapters and pragmatic gestures, that help maintain control over the process of simultaneous interpreting, structure the output, and reduce the cognitive load and stress experienced by participants while performing the task.
10

Jutras, Benoît, Josée Lagacé, Annik Lavigne, Andrée Boissonneault, and Charlen Lavoie. "Auditory processing disorders, verbal disfluency, and learning difficulties: A case study". International Journal of Audiology 46, no. 1 (January 2007): 31–38. http://dx.doi.org/10.1080/14992020601083321.


Dissertations and theses on the topic "Verbal disfluency":

1

Sheikh, Shakeel Ahmad. "Apprentissage profond pour la détection du bégaiement". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0005.

Abstract:
Stuttering is the speech disorder most frequently observed among speech impairments and manifests in the form of core behaviours. The tedious and time-consuming task of detecting and analyzing the speech patterns of persons who stutter (PWS), with the goal of rectifying them, is often handled manually by speech therapists and is biased towards their subjective beliefs. Moreover, automatic speech recognition (ASR) systems also fail to recognize stuttered speech, which makes it impractical for PWS to access virtual digital assistants such as Siri, Alexa, etc. This thesis develops audio-based stuttering detection (SD) systems that successfully capture different variabilities in stuttering utterances, such as speaking styles, age, and accents, and that learn robust stuttering representations, with the aim of providing a fair, consistent, and unbiased assessment of stuttered speech. While most existing SD systems use multiple binary classifiers, one for each stutter type, we present a unified multi-class StutterNet capable of detecting multiple stutter types. Approaching the class-imbalance problem in the stuttering domain, we investigated the impact of applying a weighted loss function, and also presented a Multi-contextual (MC) Multi-branch (MB) StutterNet to improve the detection performance of minority classes. Exploiting speaker information, under the assumption that stuttering models should be invariant to meta-data such as speaker identity, we present an adversarial multi-task learning (MTL) SD method that learns robust, stutter-discriminative, speaker-invariant representations. Due to the paucity of labeled data, the automated SD task is limited in its use of large deep models for capturing different variabilities; we therefore introduced the first-ever self-supervised learning (SSL) framework in the SD domain. The SSL framework first trains a feature extractor for a pre-text task using a large quantity of unlabeled non-stuttering audio data to capture these different variabilities, and then applies the learned feature extractor to a downstream SD task using limited labeled stuttering audio data.
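The weighted loss function mentioned for the class-imbalance problem can be illustrated with a plain class-weighted cross-entropy. This is a generic sketch only: the class names, frequencies, and the inverse-frequency weighting scheme are assumptions for illustration, not the thesis's actual implementation:

```python
import math

# Hypothetical stutter-class frequencies; minority classes get larger
# weights via simple inverse-frequency weighting.
CLASS_FREQ = {"fluent": 0.70, "repetition": 0.15, "prolongation": 0.10, "block": 0.05}
WEIGHTS = {c: 1.0 / f for c, f in CLASS_FREQ.items()}

def weighted_cross_entropy(probs, true_class):
    """Cross-entropy for one sample, scaled by its class weight."""
    return -WEIGHTS[true_class] * math.log(probs[true_class])

# The same predicted probability is penalised more heavily when the
# true class is rare, pushing training towards minority stutter types.
probs = {"fluent": 0.25, "repetition": 0.25, "prolongation": 0.25, "block": 0.25}
loss_majority = weighted_cross_entropy(probs, "fluent")
loss_minority = weighted_cross_entropy(probs, "block")
print(loss_majority < loss_minority)  # prints True
```

The design point is simply that an identical prediction error contributes more to the loss when the true class is underrepresented, which counteracts a classifier's tendency to ignore minority stutter types.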
2

Biondi, Giulia Maria Rosa. "Analisi strumentale della produzione verbale nella disfluenze locutorie". Thesis, Universita' degli Studi di Catania, 2011. http://hdl.handle.net/10761/322.

Abstract:
The most recent brain imaging studies have documented: hyperactivation of motor areas; anomalous right lateralization, or bilateral activation, of areas typically left-lateralized in fluent speakers; additional activation of motor and non-motor areas; absence of bilateral auditory activation together with anomalies in auditory processing; absence of activation in the basal ganglia; and morphological brain differences between people who stutter and fluent speakers, with anatomical anomalies in the speech and language areas. Acoustic analysis of the verbal production of people who stutter has received growing interest from several research groups in recent years, as the hypotheses that regard the fluency disorder as the consequence of a disorder of the motor or sensorimotor processes underlying speech production have consolidated. After a brief review of studies on the acoustic aspects of stuttered speech, with particular attention to temporal parameters, a battery of instrumental tests applicable in the clinical setting is proposed. The battery includes, in particular, measurement of the Voice Reaction Time and the Voice Onset Time and recording of the Mismatch Negativity.
3

Tian, Leimin. "Recognizing emotions in spoken dialogue with acoustic and lexical cues". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31284.

Abstract:
Automatic emotion recognition has long been a focus of Affective Computing. It has become increasingly apparent that awareness of human emotions in Human-Computer Interaction (HCI) is crucial for advancing related technologies, such as dialogue systems. However, performance of current automatic emotion recognition is disappointing compared to human performance. Current research on emotion recognition in spoken dialogue focuses on identifying better feature representations and recognition models from a data-driven point of view. The goal of this thesis is to explore how incorporating prior knowledge of human emotion recognition in the automatic model can improve state-of-the-art performance of automatic emotion recognition in spoken dialogue. Specifically, we study this by proposing knowledge-inspired features representing occurrences of disfluency and non-verbal vocalisation in speech, and by building a multimodal recognition model that combines acoustic and lexical features in a knowledge-inspired hierarchical structure. In our study, emotions are represented with the Arousal, Expectancy, Power, and Valence emotion dimensions. We build unimodal and multimodal emotion recognition models to study the proposed features and modelling approach, and perform emotion recognition on both spontaneous and acted dialogue. Psycholinguistic studies have suggested that DISfluency and Non-verbal Vocalisation (DIS-NV) in dialogue is related to emotions. However, these affective cues in spoken dialogue are overlooked by current automatic emotion recognition research. Thus, we propose features for recognizing emotions in spoken dialogue which describe five types of DIS-NV in utterances, namely filled pause, filler, stutter, laughter, and audible breath. Our experiments show that this small set of features is predictive of emotions. 
Our DIS-NV features achieve better performance than benchmark acoustic and lexical features for recognizing all emotion dimensions in spontaneous dialogue. Consistent with Psycholinguistic studies, the DIS-NV features are especially predictive of the Expectancy dimension of emotion, which relates to speaker uncertainty. Our study illustrates the relationship between DIS-NVs and emotions in dialogue, which contributes to Psycholinguistic understanding of them as well. Note that our DIS-NV features are based on manual annotations, yet our long-term goal is to apply our emotion recognition model to HCI systems. Thus, we conduct preliminary experiments on automatic detection of DIS-NVs, and on using automatically detected DIS-NV features for emotion recognition. Our results show that DIS-NVs can be automatically detected from speech with stable accuracy, and auto-detected DIS-NV features remain predictive of emotions in spontaneous dialogue. This suggests that our emotion recognition model can be applied to a fully automatic system in the future, and holds the potential to improve the quality of emotional interaction in current HCI systems. To study the robustness of the DIS-NV features, we conduct cross-corpora experiments on both spontaneous and acted dialogue. We identify how dialogue type influences the performance of DIS-NV features and emotion recognition models. DIS-NVs contain additional information beyond acoustic characteristics or lexical contents. Thus, we study the gain of modality fusion for emotion recognition with the DIS-NV features. Previous work combines different feature sets by fusing modalities at the same level using two types of fusion strategies: Feature-Level (FL) fusion, which concatenates feature sets before recognition; and Decision-Level (DL) fusion, which makes the final decision based on outputs of all unimodal models. However, features from different modalities may describe data at different time scales or levels of abstraction. 
Moreover, Cognitive Science research indicates that when perceiving emotions, humans make use of information from different modalities at different cognitive levels and time steps. Therefore, we propose a HierarchicaL (HL) fusion strategy for multimodal emotion recognition, which incorporates features that describe data at a longer time interval or which are more abstract at higher levels of its knowledge-inspired hierarchy. Compared to FL and DL fusion, HL fusion incorporates both inter- and intra-modality differences. Our experiments show that HL fusion consistently outperforms FL and DL fusion on multimodal emotion recognition in both spontaneous and acted dialogue. The HL model combining our DIS-NV features with benchmark acoustic and lexical features improves current performance of multimodal emotion recognition in spoken dialogue. To study how other emotion-related tasks of spoken dialogue can benefit from the proposed approaches, we apply the DIS-NV features and the HL fusion strategy to recognize movie-induced emotions. Our experiments show that although designed for recognizing emotions in spoken dialogue, DIS-NV features and HL fusion remain effective for recognizing movie-induced emotions. This suggests that other emotion-related tasks can also benefit from the proposed features and model structure.
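As a rough illustration of DIS-NV-style features, the five types named in the abstract could be counted from an annotated utterance as below. The annotation token format and the example utterance are hypothetical, not drawn from the thesis's corpora:

```python
# The five DIS-NV types named in the abstract, assumed here to appear
# as inline annotation tokens in a transcribed utterance.
DIS_NV_TYPES = ["<filled_pause>", "<filler>", "<stutter>", "<laughter>", "<breath>"]

def dis_nv_features(tokens):
    """Count each DIS-NV type, normalised by utterance length."""
    n = len(tokens)
    return {t: tokens.count(t) / n for t in DIS_NV_TYPES}

# A made-up annotated utterance: 13 tokens, four of them DIS-NV marks.
utterance = ("i <filled_pause> i think <stutter> th- this is "
             "<filler> you know fine <laughter>").split()
features = dis_nv_features(utterance)
print(features)
```

Each utterance thus becomes a small fixed-length feature vector, which matches the abstract's observation that even this compact feature set carries emotional signal, particularly for the Expectancy dimension.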
4

Breckinridge, Barbara LeDoux. ""The illegal alien" : how stereotypes in the media can undermine communication performance". Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3524.

Abstract:
This report explored the effects of stereotype threat (i.e., the apprehension associated with the possibility of confirming a self-relevant negative stereotype) on Latinos, a stigmatized group, as they were interviewed about their academic achievements and career aspirations. Latino participants were exposed to a self-relevant negative stereotype in the news, an illegal immigrant crossing the Mexican-American border smuggling drugs, as a stimulus activating stereotype threat. The study used deception: participants were unaware of the connection between the news article and the interview, thus ensuring stereotype threat activation. Latino participants in the illegal immigrant/criminal condition displayed more verbal disfluency and tentative language than those in the control condition, demonstrating evidence for the media's ability to activate stereotype threat.
5

Sousa, Clara Maria Oliveira de. "A pessoa humana e sua(s) circunstância(s) : bondade perante a diferença". Master's thesis, 2020. http://hdl.handle.net/10400.14/30782.

Abstract:
The interpretation of kindness in ordinary language often starts from all actions that can have a beneficial effect through openness and responsibility towards the Other, actions that receive the qualifier "good". From the Christian point of view, kindness is a divine attribute, and without it no Christian can truly exercise his or her ministry effectively. It is the foundation of the very faith and hope that runs from the dawn of creation, an exacerbated act of God's kindness, first and foremost cause, through the greatest moments in Israel's history up to the highest degree, the redemption that cost His Son's blood. To this end, a study is carried out that aims, on the one hand, to construct a research proposal on the conception of kindness based on the literature review, taking as its motto a life path, told in the first person, marked by manifest verbal disfluency, and, on the other hand, to establish its practice for the balanced development of all dimensions of the human person. Based on this framework, a teaching proposal is presented for the Teaching Unit "Have a Kind Heart" from the Catholic Moral and Religious Education (EMRC) programme for the 1st year of schooling, in the context of Supervised Teaching Practice, in line with the educational and Christian values of this subject, within the framework of a holistic Christian education, to make society more beautiful, kinder, and fairer.

Books on the topic "Verbal disfluency":

1

Smagac, Melanie. The effect of age of target audience and verbal knowledge on disfluencies and speech characteristics. Sudbury, Ont.: Laurentian University, 2005.


Conference papers on the topic "Verbal disfluency":

1

Farzana, Shahla, Ashwin Deshpande, and Natalie Parde. "How You Say It Matters: Measuring the Impact of Verbal Disfluency Tags on Automated Dementia Detection". In Proceedings of the 21st Workshop on Biomedical Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.bionlp-1.4.

