Academic literature on the topic 'Gesture Synthesis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture Synthesis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Gesture Synthesis"
Pang, Kunkun, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–12. http://dx.doi.org/10.1145/3592456.
Deng, Linhai. "FPGA-based gesture recognition and voice interaction." Applied and Computational Engineering 40, no. 1 (February 21, 2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.
Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.
Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance." Computer Music Journal 38, no. 4 (December 2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.
Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition." Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.
Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures." Acta Acustica united with Acustica 96, no. 4 (July 1, 2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.
He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures." International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.
Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.
Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis." Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.
Nakano, Atsushi, and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning." Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.
Full textDissertations / Theses on the topic "Gesture Synthesis"
Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.
Marrin Nakra, Teresa (Teresa Anne). "Inside the conductor's jacket: analysis, interpretation and musical synthesis of expressive gesture." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9165.
Full textIncludes bibliographical references (leaves 154-167).
We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as to not encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. The Conductor's Jacket was used to gather conducting data from six subjects, three professional conductors and three students, during twelve hours of rehearsals and performances. Analyses of the data yielded thirty-five significant features that seem to reflect intuitive and natural gestural tendencies, including context-based hand switching, anticipatory 'flatlining' effects, and correlations between respiration and phrasing. The results indicate that muscle tension and respiration signals reflect several significant and expressive characteristics of a conductor's gestures. From these results we present nine hypotheses about human musical expression, including ideas about efficiency, intentionality, polyphony, signal-to-noise ratios, and musical flow state. Finally, this thesis describes the Gesture Construction, a musical software system that analyzes and performs music in real-time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms.
These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score.
Ph.D. thesis by Teresa Marrin Nakra.
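The abstract above describes a concrete signal chain (up to sixteen channels sampled at 3 kHz each, muscle tension among them) from which features such as beat intensities are extracted by a bank of filters. A minimal sketch of that idea follows; it is illustrative only, not the thesis's actual filter bank, and only the 3 kHz per-channel rate is taken from the abstract.

```python
import numpy as np

def beat_intensities(emg, fs=3000, win_s=0.05):
    """Estimate beat times and intensities as local maxima of a rectified,
    moving-average-smoothed muscle-tension signal.

    Illustrative sketch: only the 3 kHz per-channel sampling rate comes
    from the abstract; window length and thresholding are invented."""
    win = int(fs * win_s)
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    threshold = envelope.mean() + envelope.std()
    return [(i / fs, float(envelope[i]))
            for i in range(1, len(envelope) - 1)
            if envelope[i] > threshold
            and envelope[i] >= envelope[i - 1]
            and envelope[i] > envelope[i + 1]]

# One second of quiet with a single 20 ms burst of muscle activity:
emg = np.zeros(3000)
emg[1500:1560] = 1.0
beats = beat_intensities(emg)  # one beat detected near t = 0.5 s
```

The same envelope-and-peak scheme would run per channel; comparing peak times across the two arms gives a crude version of the "alternation between arms" feature mentioned above.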
Pun, James Chi-Him. "Gesture recognition with application in music arrangement." Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.
Wang, Yizhong Johnty. "Investigation of gesture control for articulatory speech synthesis with a bio-mechanical mapping layer." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43193.
Pérez Carrillo, Alfonso Antonio. "Enhancing spectral synthesis techniques with performance gestures using the violin as a case study." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7264.
Thoret, Etienne. "Caractérisation acoustique des relations entre les mouvements biologiques et la perception sonore : application au contrôle de la synthèse et à l'apprentissage de gestes." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4780/document.
Full textThis thesis focused on the relations between biological movements and auditory perception in considering the specific case of graphical movements and the friction sounds they produced. The originality of this work lies in the use of sound synthesis processes that are based on a perceptual paradigm and that can be controlled by gesture models. The present synthesis model made it possible to generate acoustic stimuli which timbre was directly modulated by the velocity variations induced by a graphic gesture in order to exclusively focus on the perceptual influence of this transformational invariant. A first study showed that we can recognize the biological motion kinematics (the 1/3 power law) and discriminate simple geometric shapes simply by listening to the timbre variations of friction sounds that solely evoke velocity variations. A second study revealed the existence of dynamic prototypes characterized by sounds corresponding to the most representative elliptic trajectory, thus revealing that prototypical shapes may emerged from sensorimotor coupling. A final study showed that the kinematics evoked by friction sounds may significantly affect the dynamic and geometric dimension in the visuo-motor coupling. This shed critical light on the relevance of auditory perception in the multisensory integration of continuous motion in a situation never explored. All of these theoretical results enabled the gestural control of sound synthesis models from a gestural description and the creation of sonification tools for gesture learning and rehabilitation of a graphomotor disease, dysgraphia
Devaney, Jason Wayne. "A study of articulatory gestures for speech synthesis." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284254.
Métois, Eric. "Musical sound information: musical gestures and embedding synthesis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.
Vigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.
Full textCe mémoire de thèse présente une nouvelle interface pour l'expression musicale combinant la synthèse sonore par concaténation et les technologies de captation de mouvements dans l'espace. Ce travail commence par une présentation des dispositifs de capture de position de type main-libre, en étudiant leur principes de fonctionnement et leur caractéristiques. Des exemples de leur application dans les contextes musicaux sont aussi étudiés. Une attention toute particulière est accordée à quatre systèmes: leurs spécifications techniques ainsi que leurs performances (évaluées par des métriques quantitatives) sont comparées expérimentalement. Ensuite, la synthèse concaténative est décrite. Cette technique de synthèse sonore consiste à synthéthiser une séquence musicale cible à partir de sons pré-enregistrés, sélectionnés et concaténés en fonction de leur adéquation avec la cible. Trois implémentations de cette technique sont comparées, permettant ainsi d'en choisir une pour notre application. Enfin, nous décrivons SoundCloud, une nouvelle interface qui, en ajoutant une interface visuelle à la méthode de synthèse concaténative, permet d'en étendre les possibilités de contrôle. SoundCloud permet en effet de contrôler la synthése de sons en utilisant des gestes libres des mains pour naviguer au sein d'un espace tridimensionnel de descripteurs des sons d'une base de données.
Maestre Gómez, Esteban. "Modeling instrumental gestures: an analysis/synthesis framework for violin bowing." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7562.
Full textThis work presents a methodology for modeling instrumental gestures in excitation-continuous musical instruments. In particular, it approaches bowing control in violin classical performance. Nearly non-intrusive sensing techniques are introduced and applied for accurately acquiring relevant timbre-related bowing control parameter signals and constructing a performance database. By defining a vocabulary of bowing parameter envelopes, the contours of bow velocity, bow pressing force, and bow-bridge distance are modeled as sequences of Bézier cubic curve segments, yielding a robust parameterization that is well suited for reconstructing original contours with significant fidelity. An analysis/synthesis statistical modeling framework is constructed from a database of parameterized contours of bowing controls, enabling a flexible mapping between score annotations and bowing parameter envelopes. The framework is used for score-based generation of synthetic bowing parameter contours through a bow planning algorithm able to reproduce possible constraints imposed by the finite length of the bow. Rendered bowing control signals are successfully applied to automatic performance by being used for driving offline violin sound generation through two of the most extended techniques: digital waveguide physical modeling, and sample-based synthesis.
Books on the topic "Gesture Synthesis"
Bernstein, Zachary. Thinking In and About Music. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190949235.001.0001.
Bennett, Christopher. Grace, Freedom, and the Expression of Emotion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198766858.003.0010.
Silver, Morris. Economic Structures of Antiquity. Greenwood Publishing Group, Inc., 1995. http://dx.doi.org/10.5040/9798400643606.
Full textBook chapters on the topic "Gesture Synthesis"
Losson, Olivier, and Jean-Marc Vannobel. "Sign Specification and Synthesis." In Gesture-Based Communication in Human-Computer Interaction, 239–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_21.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion, 2201–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_5.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion, 1–12. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_5-1.
Olivier, Patrick. "Gesture Synthesis in a Real-World ECA." In Lecture Notes in Computer Science, 319–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24842-2_35.
Wachsmuth, Ipke, and Stefan Kopp. "Lifelike Gesture Synthesis and Timing for Conversational Agents." In Gesture and Sign Language in Human-Computer Interaction, 120–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_13.
Hartmann, Björn, Maurizio Mancini, and Catherine Pelachaud. "Implementing Expressive Gesture Synthesis for Embodied Conversational Agents." In Lecture Notes in Computer Science, 188–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_22.
Julliard, Frédéric, and Sylvie Gibet. "Reactiva’Motion Project: Motion Synthesis Based on a Reactive Representation." In Gesture-Based Communication in Human-Computer Interaction, 265–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_23.
Arfib, Daniel, and Loïc Kessous. "Gestural Control of Sound Synthesis and Processing Algorithms." In Gesture and Sign Language in Human-Computer Interaction, 285–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_30.
Zhang, Fan, Naye Ji, Fuxing Gao, and Yongping Li. "DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model." In MultiMedia Modeling, 231–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_18.
Crombie Smith, Kirsty, and William Edmondson. "The Development of a Computational Notation for Synthesis of Sign and Gesture." In Gesture-Based Communication in Human-Computer Interaction, 312–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24598-8_29.
Full textConference papers on the topic "Gesture Synthesis"
Bargmann, Robert, Volker Blanz, and Hans-Peter Seidel. "A nonlinear viseme model for triphone-based speech synthesis." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813362.
Sargin, M. E., O. Aran, A. Karpov, F. Ofli, Y. Yasinnik, S. Wilson, E. Erzin, Y. Yemez, and A. M. Tekalp. "Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis." In 2006 IEEE International Conference on Multimedia and Expo. IEEE, 2006. http://dx.doi.org/10.1109/icme.2006.262663.
Lu, Shuhong, Youngwoo Yoon, and Andrew Feng. "Co-Speech Gesture Synthesis using Discrete Gesture Token Learning." In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10342027.
Wang, Siyang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, and Éva Székely. "Integrated Speech and Gesture Synthesis." In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479914.
Breidt, Martin, Heinrich H. Bülthoff, and Cristobal Curio. "Robust semantic analysis by synthesis of 3D facial motion." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771336.
Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771384.
Gunes, Hatice, Bjorn Schuller, Maja Pantic, and Roddy Cowie. "Emotion representation, analysis and synthesis in continuous space: A survey." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771357.
Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771401.
Han, Huijian, Rongjun Song, and Yanqiang Fu. "One Algorithm of Gesture Animation Synthesis." In 2016 12th International Conference on Computational Intelligence and Security (CIS). IEEE, 2016. http://dx.doi.org/10.1109/cis.2016.0091.
Lee, Chan-Su, and Dimitris Samaras. "Analysis and synthesis of facial expressions using decomposable nonlinear generative models." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771360.