Ready-made bibliography on the topic "Speech imagery"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Speech imagery".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant details are available in the work's metadata.
Journal articles on the topic "Speech imagery"
Scott, Mark. "Speech imagery recalibrates speech-perception boundaries". Attention, Perception, & Psychophysics 78, no. 5 (April 11, 2016): 1496–511. http://dx.doi.org/10.3758/s13414-016-1087-6.
Raru, Gregorius. "TUTURAN RITUAL HAMBOR HAJU PADA MASYARAKAT MANGGARAI SEBUAH KAJIAN LINGUISTIK KEBUDAYAAN". Paradigma, Jurnal Kajian Budaya 6, no. 1 (August 25, 2016): 28. http://dx.doi.org/10.17510/paradigma.v6i1.79.
SHERGILL, S. S., E. T. BULLMORE, M. J. BRAMMER, S. C. R. WILLIAMS, R. M. MURRAY and P. K. McGUIRE. "A functional study of auditory verbal imagery". Psychological Medicine 31, no. 2 (February 2001): 241–53. http://dx.doi.org/10.1017/s003329170100335x.
WANG, Yongli, Shengnan GE, LantinHuang Lancy, Qin WAN and Haidan LU. "Neural mechanism of speech imagery". Advances in Psychological Science 31, no. 4 (2023): 608. http://dx.doi.org/10.3724/sp.j.1042.2023.00608.
Koizumi, Shinichi. "Effects of imagery ability and speech anxiety on imagery vividness of imaginary speech scenes." Japanese Journal of Psychology 68, no. 3 (1997): 203–8. http://dx.doi.org/10.4992/jjpsy.68.203.
Rosida, Ana. "A COMPARATIVE STUDY OF POETRY’S STRUCTURE: ‘NIGHT’ BY BLAKE AND ‘SHE WALKS IN BEAUTY’ BY BYRON". JENTERA: Jurnal Kajian Sastra 6, no. 2 (December 28, 2017): 142. http://dx.doi.org/10.26499/jentera.v6i2.435.
McGuire, P. K., D. A. Silbersweig, R. M. Murray, A. S. David, R. S. J. Frackowiak and C. D. Frith. "Functional anatomy of inner speech and auditory verbal imagery". Psychological Medicine 26, no. 1 (January 1996): 29–38. http://dx.doi.org/10.1017/s0033291700033699.
Hastuti, Nur, and Sri Rezeki Ayuni. "Citraan dan Majas dalam Lirik Lagu "Harehare Ya" Karya Maigo Hanyuu Kajian Stilistika". IZUMI 12, no. 1 (May 5, 2023): 1–12. http://dx.doi.org/10.14710/izumi.12.1.1-12.
Najwa Fadilanitaa, Khofifah Indar F, and Arneta Destria. "DIKSI, CITRAAN, DAN MAJAS PADA PUISI ’’AKU MENUNGGU BUNGA’’ KARYA HERI ISNAINI". Protasis: Jurnal Bahasa, Sastra, Budaya, dan Pengajarannya 1, no. 1 (June 28, 2022): 70–75. http://dx.doi.org/10.55606/protasis.v1i1.26.
Chengaiyan, Sandhya, Divya Balathayil, Kavitha Anandan and Christy Bobby Thomas. "Effect of Power and Phase Synchronization in Multi-Trial Speech Imagery". International Journal of Software Science and Computational Intelligence 10, no. 4 (October 2018): 44–61. http://dx.doi.org/10.4018/ijssci.2018100104.
Pełny tekst źródłaRozprawy doktorskie na temat "Speech imagery"
Scott, Mark. "Speech imagery as corollary discharge". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42231.
McCord, Walter White. "The contribution of agricultural imagery to the interpretation of Amos". Theological Research Exchange Network (TREN), 1996. http://www.tren.com.
Peixoto, Michael Viana. "Prática intersemiótica no discurso imagético-cancional de Adriana Calcanhotto: uma proposta de análise". Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11914.
Pełny tekst źródłaA presente tese âPrÃtica intersemiÃtica no Discurso ImagÃtico-Cancional de Adriana Calcanhotto: uma proposta de anÃliseâ estuda e define o discurso imagÃtico-cancional como uma prÃtica discursiva que mobiliza, atravÃs de procedimentos discursivos, linguagens de diferentes modalidades (tanto de natureza verbal quanto nÃo-verbal) para, num processo intersemiÃtico, compatibilizarem-se com a produÃÃo literomusical e, a partir disso, propiciarem a construÃÃo de sentidos. O alicerce teÃrico no qual fincamos esse conceito procede da AnÃlise do Discurso de linha francesa, considerando, sobretudo, as reflexÃes de Maingueneau (1999), Costa (2012), dentre outros. Com base nisso, a questÃo norteadora da pesquisa: Que proposta de abordagem teÃrica e metodolÃgica para uma anÃlise do discurso imagÃtico-cancional pode ser elaborada a partir das categorias discursivas? A operacionalizaÃÃo dessa questÃo e do objetivo se deu por meio da metodologia exploratÃria, em que, a partir de um corpus especÃfico â a produÃÃo literomusical de Adriana Calcanhotto compreendida entre 1990 e 2000, cuja delimitaÃÃo se deu por ordem cronolÃgica, a fim de se perceber como as respectivas produÃÃes se organizam de acordo com o inÃcio e o encerramento da dÃcada; a partir da apropriaÃÃo das categorias discursivas, cancionais e visuais, elaboramos um guia que propÃe um percurso que viabilize a construÃÃo dos sentidos do texto. à luz dessa metodologia, procedemos o exercÃcio de anÃlise dos dados o qual nos permitiu a conclusÃo de que a natureza interdiscursiva do discurso imagÃtico-cancional propicia sentidos tais que sà os sÃo possÃveis devido ao fenÃmeno da intersemioticidade que se estabelece e que . Essa conclusÃo nos possibilita afirmar que, em virtude disso, hà que promover um letramento verbo-visual; tais construÃÃes discursivas requerem do leitor uma aprendizagem acerca do modo de ler determinadas produÃÃes discursivas.
This thesis " intersemiotic Practice in Speech - Cancional imagery of Adriana Calcanhotto: a proposed analysis " studies and sets the image- cancional discourse as a discursive practice that mobilizes through discursive procedures , languages of different modalities (both verbal nature as nonverbal) to, in intersemiotic process compatibilizarem with the literomusical production and , from that , they encourage the construction of meaning. The theoretical foundation on which fincamos this concept comes from the analysis of French Discourse, considering especially the reflections of Maingueneau (1999), Costa (2012), among others. Based on this, the guiding research question: What proposal for theoretical and methodological approach to an analysis of image- cancional speech can be compiled from the discursive categories? The operationalization of this issue and the goal was through the exploratory methodology, in which, from a specific corpus - the literomusical production of Adriana Calcanhotto between 1990 and 2000, whose limits given in chronological order, in order to realize as their productions are organized according to the opening and closure of the decade; from the appropriation of discursive and visual cancionais , categories prepared a guide that offers a path that makes possible the construction of meanings of the text . In light of this methodology , we proceed to the performance analysis of the data which allowed us to conclude that the nature of the image- interdiscursive cancional speech provides such meanings that are only possible due to the phenomenon of intersemioticidade that is established and that. This conclusion allows us to state that, because of this, there is a verb that promote visual literacy; such discursive constructions require the reader learning about the way of reading certain discursive productions.
Nalborczyk, Ladislas. "Understanding rumination as a form of inner speech : probing the role of motor processes". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAS017/document.
Rumination is known to be a predominantly verbal process and has been proposed to be considered as a dysfunctional form of inner speech (i.e., the silent production of words in one’s mind). On the other hand, research on the psychophysiology of inner speech revealed that the neural processes involved in overt and covert speech tend to be very similar. This is coherent with the idea that some forms of inner speech could be considered as a kind of simulation of overt speech, in the same way as imagined actions can be considered as the result of a simulation of the corresponding overt action (e.g., walking and imagined walking). In other words, the motor simulation hypothesis suggests that the speech motor system should be involved as well during inner speech production. The corollary hypothesis might be drawn, according to which the production of inner speech (and rumination) should be disrupted by a disruption of the speech motor system. We conducted a series of five studies aiming to probe the role of the speech motor system in rumination. Overall, our results highlight that although verbal rumination may be considered as a form of inner speech, it might not specifically involve the speech motor system. More precisely, we argue that rumination might be considered as a particularly strongly condensed form of inner speech that does not systematically involve fully specified articulatory features. We discuss these findings in relation to the habit-goal framework of depressive rumination and we discuss the implications of these findings for theories of inner speech production.
Hofmann, Petra. "Infernal imagery in Anglo-Saxon charters". Thesis, St Andrews, 2008. http://hdl.handle.net/10023/498.
Wendel, Sue M. "Insights into the Mental Imagery and Gestural Awareness of Representational Gestures Produced in Everyday Talk: An Exploratory Study of Using Participants' Comments as Data". PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2646.
Runnals, Jennifer Jane. "Exploring the Cardiovascular Response to Anger Imagery and Speech in Vietnam Veterans With and Without Posttraumatic Stress Disorder". Also available to VCU users online at:, 2007. http://hdl.handle.net/10156/1882.
Hung, Pei-Fang. "Mental imagery and idiom understanding in adults: Examining dual coding theory". Thesis, University of Oregon, 2010. http://hdl.handle.net/1794/10878.
Pełny tekst źródłaThis study examined idiom understanding in 120 neurologically healthy adults, ages 20-29 (20s Group), 40-49 (40s Group), 60-69 (60s Group), and 80-89 (80s Group) years old. Each participant was administered a familiarity task, definition explanation task, mental imagery task, and forced-choice comprehension task. Twenty idioms, 10 transparent and 10 opaque, were used with no supporting contexts. Participants were asked to rate the familiarity of each idiom, to provide a definition of each, to generate a mental image of each, and to select the best definition of each from among four options. It was predicted that younger and older adults would perform equally well on the comprehension task but that older adults would perform poorer than younger adults on the explanation task. Additionally, mental imagery of idioms was expected to become more figurative with advancing age, and participants were expected to perform better on highly familiar and transparent idioms than on less familiar and opaque ones. Participants rated all 20 idioms as highly familiar, with the lowest familiarity rating for participants in the 20s Group. No significant differences were found on the forced-choice comprehension task across the four age groups although the 20s Group scored the lowest among all age groups. The 60s Group performed significantly better than the 20s Group on the definition explanation task, but no significant differences were found between the other age groups. No significant differences were found in generating mental images between transparent and opaque idioms, and mental images tended to be figurative rather than literal for both types of idioms. The present study adds to our knowledge of idiom understanding across adulthood. Familiarity seemed to play a stronger role than transparency in idiom understanding in adults. Once an idiom was learned and stored as a lexical unit, people used the idiomatic meaning and generated figurative mental imagery immediately without accessing the literal meaning or the literal mental image.
Committee in charge: Marilyn Nippold, Chairperson, Special Education and Clinical Sciences; Roland Good, Member, Special Education and Clinical Sciences; Deborah Olson, Member, Special Education and Clinical Sciences; Nathaniel Teich, Outside Member, English
Diedrichs, Victoria Anne. "Leveraging Pupillometry and Luminance-Based Mental Imagery for a Novel Mode of Communication". Master's thesis, Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/352749.
The aim of the present study was to characterize participants’ abilities to answer binary yes/no questions by mentally manipulating imagery to produce imagined changes in luminance, which would in turn cause reflexive perturbations in pupil diameter. First, a paired association was established with participants, linking “yes” responses with imagining a “sunny sky” and “no” responses with imagining a “dark room”. Participants (N=20) then answered 16 yes/no questions using this response method, in place of providing verbal or gestural (e.g., head nod) answers. Pupil diameters were recorded for a period of 8000 ms following each stimulus question while participants maintained the mental image that corresponded with their answer. We hypothesized that on average, “no” responses would yield a pupil dilation and increased diameter relative to baseline, while “yes” responses would instead result in constrictions and smaller pupil diameters compared to baseline. A 2-factor repeated measures analysis of variance (ANOVA), where time was one factor and response type (i.e., yes or no) was the other, revealed a statistically significant interaction of time and response type, a significant main effect of time, and a trend toward significance for response type in aggregated group data. Item level discrimination consisted of comparing the mean pupil diameter in response to a single item for a single participant (e.g., “yes” response on one trial) to the mean pupil diameter of all contrasting responses for that same participant (e.g., all “no” response trials). This method achieved a 64.5% discrimination accuracy. This investigation affirmed the plausibility of leveraging pupillometry and luminance-based mental imagery in favor of an alternative communication system for individuals who are locked-in, as well as its potential as a screening tool. However, further investigation is warranted prior to its implementation.
Major, Mary Elizabeth. "War's Visual Discourse: A Content Analysis of Iraq War Imagery". Thesis, Portland State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1535957.
This study reports the findings of a systematic visual content analysis of 356 randomly sampled images published about the Iraq War in Time, Newsweek, and U.S. News and World Report from 2003-2009. In comparison to a 1995 Gulf War study, published images in all three newsmagazines continued to be U.S.-centric, with the highest content frequencies reflected in the categories U.S. troops on combat patrol, Iraqi civilians, and U.S. political leaders respectively. These content categories do not resemble the results of the Gulf War study in which armaments garnered the largest share of the images with 23%.
This study concludes that embedding photojournalists, in addition to media economics, governance, and the media-organizational culture, restricted an accurate representation of the Iraq War and its consequences. Embedding allowed more access to both troops and civilians than the journalistic pool system of the Gulf War, which stationed the majority of journalists in Saudi Arabia and allowed only a few journalists into Iraq with the understanding they would share information. However, the perceived opportunity by journalists to more thoroughly cover the war through the policy of embedding was not realized to the extent they had hoped for. The embed protocols acted more as an indirect form of censorship.
Books on the topic "Speech imagery"
Imagery for preaching. Minneapolis: Fortress Press, 1989.
John Dryden's imagery. USA: Univ. Presses of Florida, 1989.
Lord, Jennifer L. Finding language and imagery: Words for holy speech. Minneapolis, MN: Fortress Press, 2009.
Finding language and imagery: Words for holy speech. Minneapolis, MN: Fortress Press, 2009.
Singh, B. M. Water-imagery in Yeats's works. New Delhi: Anmol Publications, 1992.
Suscavage, Charlene E. Calderón: The imagery of tragedy. New York: Peter Lang, 1991.
Khristova, Evdokii͡a. Kak vŭzpriemame rechta. Sofii͡a: Universitetsko izd-vo "Kliment Okhridski", 1988.
Collins, Christopher. Reading the written image: Verbal play, interpretation, and the roots of iconophobia. University Park, Pa: Pennsylvania State University Press, 1991.
The architecture of imagery in Alberto Moravia's fiction. Chapel Hill: U.N.C. Dept. of Romance Languages, 1993.
Znajdź pełny tekst źródłaCzęści książek na temat "Speech imagery"
McNeill, David, Karl-Erik McCullough, Francis Quek, Susan Duncan, Robert Bryll, Xin-Feng Ma and Rashid Ansari. "Dynamic Imagery in Speech and Gesture". In Text, Speech and Language Technology, 27–44. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-017-2367-1_3.
Sikdar, Debdeep, Rinku Roy and Manjunatha Mahadevappa. "Chaos Analysis of Speech Imagery of IPA Vowels". In Intelligent Human Computer Interaction, 101–10. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04021-5_10.
Patel, Jigar, and Syed Abudhagir Umar. "Detection of Imagery Vowel Speech Using Deep Learning". In Lecture Notes in Electrical Engineering, 237–47. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1476-7_23.
Lonsdale, Steven H. "Pursuit and Attack: Reversals in Hunting Imagery". In Creatures of Speech: Lion, Herding, and Hunting Similes in the Iliad, 85–102. Wiesbaden: Vieweg+Teubner Verlag, 1990. http://dx.doi.org/10.1007/978-3-663-12001-8_7.
Lonsdale, Steven H. "Conclusions: Animal Imagery in the Homeric Narrative". In Creatures of Speech: Lion, Herding, and Hunting Similes in the Iliad, 103–28. Wiesbaden: Vieweg+Teubner Verlag, 1990. http://dx.doi.org/10.1007/978-3-663-12001-8_8.
Paaß, Gerhard, and Sven Giesselbach. "Foundation Models for Speech, Images, Videos, and Control". In Artificial Intelligence: Foundations, Theory, and Algorithms, 313–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-23190-2_7.
Gruber, Ivan, Marek Hrúz, Miloš Železný and Alexey Karpov. "X-Bridge: Image-to-Image Translation with Reconstruction Capabilities". In Speech and Computer, 238–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87802-3_22.
Paulus, Dietrich W. R., and Joachim Hornegger. "Speech Recognition". In Pattern Recognition of Images and Speech in C++, 329–53. Wiesbaden: Vieweg+Teubner Verlag, 1997. http://dx.doi.org/10.1007/978-3-663-13991-1_25.
Blanchet, Gérard, and Maurice Charbit. "Speech Processing". In Digital Signal and Image Processing Using Matlab®, 105–30. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781118999592.ch5.
Bureš, Lukáš, Petr Neduchal, Miroslav Hlaváč and Marek Hrúz. "Generation of Synthetic Images of Full-Text Documents". In Speech and Computer, 68–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99579-3_8.
Pełny tekst źródłaStreszczenia konferencji na temat "Speech imagery"
Li Wang, Xiong Zhang and Yu Zhang. "Extending motor imagery by speech imagery for brain-computer interface". In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2013. http://dx.doi.org/10.1109/embc.2013.6611183.
Manuel Macias-Macias, Jose, Juan Alberto Ramirez-Quintana, Graciela Ramirez-Alonso and Mario Ignacio Chacon-Murguia. "Deep Learning Networks for Vowel Speech Imagery". In 2020 17th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE). IEEE, 2020. http://dx.doi.org/10.1109/cce50788.2020.9299143.
Gurkok, Hayrettin, Mannes Poel and Job Zwiers. "Classifying motor imagery in presence of speech". In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5595733.
Williams, David P. "Image-quality prediction of synthetic aperture sonar imagery". In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495165.
Choi, Jaehoon, Netiwit Kaongoen and Sungho Jo. "Investigation on Effect of Speech Imagery EEG Data Augmentation with Actual Speech". In 2022 10th International Winter Conference on Brain-Computer Interface (BCI). IEEE, 2022. http://dx.doi.org/10.1109/bci53720.2022.9735108.
Idrees, Basil M., and Omar Farooq. "Vowel classification using wavelet decomposition during speech imagery". In 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN). IEEE, 2016. http://dx.doi.org/10.1109/spin.2016.7566774.
Ngamrassameewong, Sansit, Vichaya Manatchinapisit and Yodchanan Wongsawat. "Improvement of Motor Imagery BCI using Silent Speech". In 2020 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2020. http://dx.doi.org/10.1109/ecti-con49241.2020.9158081.
Sikdar, Debdeep, Rinku Roy, Koushik Bakshi and Manjunatha Mahadevappa. "Multifractal Analysis of Speech Imagery of IPA Vowels". In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2018. http://dx.doi.org/10.1109/embc.2018.8512579.
Sandhya, C., G. Srinidhi, R. Vaishali, M. Visali and A. Kavitha. "Analysis of speech imagery using brain connectivity estimators". In 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC). IEEE, 2015. http://dx.doi.org/10.1109/icci-cc.2015.7259410.
Guven, Erhan, and Peter Bock. "Speech Emotion Recognition using a backward context". In 2010 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2010). IEEE, 2010. http://dx.doi.org/10.1109/aipr.2010.5759701.
Pełny tekst źródłaRaporty organizacyjne na temat "Speech imagery"
WEHLBURG, JOSEPH C., CHRISTINE M. WEHLBURG, JODY L. SMITH, OLGA B. SPAHN, MARK W. SMITH and CRAIG M. BONEY. High Speed 2D Hadamard Transform Spectral Imager. Office of Scientific and Technical Information (OSTI), February 2003. http://dx.doi.org/10.2172/808596.
Maddocks, Sophie. Image-Based Abuse: A Threat to Privacy, Safety, and Speech. MediaWell, Social Science Research Council, April 2023. http://dx.doi.org/10.35650/mw.3051.d.2023.
Jenkins, Charles M., Yasuyuki Horie, Robert C. Ripley and William H. Wilson. Explosively Driven Particle Fields Imaged Using a High-Speed Framing Camera and Particle Image Velocimetry. Fort Belvoir, VA: Defense Technical Information Center, August 2011. http://dx.doi.org/10.21236/ada548954.
Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir and Tom Porter. Automated imaging broiler chicksexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.
Washington Nichols, Bruno, and Pedro Chapaval Pimentel. Impeachment e imagem pública: uma análise do discurso vazado de Michel Temer / Impeachment and public image: an analysis of Michel Temer’s leaked speech. Revista Internacional de Relaciones Públicas, June 2017. http://dx.doi.org/10.5783/rirp-13-2017-04-41-60.
Бережна, Маргарита Василівна. Psycholinguistic Image of Joy (in the Computer-Animated Film Inside Out). Psycholinguistics in a Modern World, 2021. http://dx.doi.org/10.31812/123456789/5827.
Бережна, Маргарита Василівна. Maleficent: from the Matriarch to the Scorned Woman (Psycholinguistic Image). Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/5766.
Sullivan, Gary D., and Andrea M. Faucette. High-Speed Image Recognition Control System. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada389666.
Бережна, Маргарита Василівна. The Destroyer Psycholinguistic Archetype. Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/6036.
Lee, Jingeol. Measurements of granular flow dynamics with high speed digital images. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/425294.