Academic literature on the topic "Gesture Synthesis"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, and conference proceedings, and other scholarly sources on the topic "Gesture Synthesis".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Gesture Synthesis"
Pang, Kunkun, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi and Taku Komura. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–12. http://dx.doi.org/10.1145/3592456.
Deng, Linhai. "FPGA-based gesture recognition and voice interaction". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.
Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen and Libin Liu. "Rhythmic Gesticulator". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.
Yang, Qi and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance". Computer Music Journal 38, no. 4 (December 2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.
Souza, Fernando and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition". Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.
Bouënard, Alexandre, Marcelo M. M. Wanderley and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures". Acta Acustica united with Acustica 96, no. 4 (July 1, 2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.
He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures". International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.
Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.
Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles and Francesc Alías. "Gesture synthesis adapted to speech emphasis". Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.
Nakano, Atsushi and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning". Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.
Texto completoTesis sobre el tema "Gesture Synthesis"
Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.
Texto completoMarrin, Nakra Teresa (Teresa Anne) 1970. "Inside the conductor's jacket : analysis, interpretation and musical synthesis of expressive gesture". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9165.
Texto completoIncludes bibliographical references (leaves 154-167).
We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as not to encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. The Conductor's Jacket was used to gather conducting data from six subjects, three professional conductors and three students, during twelve hours of rehearsals and performances. Analyses of the data yielded thirty-five significant features that seem to reflect intuitive and natural gestural tendencies, including context-based hand switching, anticipatory 'flatlining' effects, and correlations between respiration and phrasing. The results indicate that muscle tension and respiration signals reflect several significant and expressive characteristics of a conductor's gestures. From these results we present nine hypotheses about human musical expression, including ideas about efficiency, intentionality, polyphony, signal-to-noise ratios, and musical flow state. Finally, this thesis describes the Gesture Construction, a musical software system that analyzes and performs music in real-time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms.
These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score.
by Teresa Marrin Nakra.
Ph.D.
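The bank of software filters described in the abstract above could be approximated, in its simplest form, by peak picking on a sensor channel. The following sketch is purely illustrative: the thresholding scheme and signal representation are assumptions, not the thesis's actual algorithm.

```python
def beat_intensities(signal, threshold=0.5):
    """Return the amplitudes of local maxima above `threshold`.

    A minimal, hypothetical sketch of a beat-intensity filter: each
    supra-threshold local maximum in the (normalized) sensor channel
    is treated as one beat, and its amplitude as the beat intensity.
    """
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] <= signal[i] > signal[i + 1]:
            peaks.append(signal[i])
    return peaks
```

In a real system such a filter would run per channel in real time; here the whole signal is processed offline for clarity.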
Pun, James Chi-Him. "Gesture recognition with application in music arrangement". Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.
Wang, Yizhong Johnty. "Investigation of gesture control for articulatory speech synthesis with a bio-mechanical mapping layer". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43193.
Pérez Carrillo, Alfonso Antonio. "Enhancing spectral synthesis techniques with performance gestures using the violin as a case study". Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7264.
Thoret, Etienne. "Caractérisation acoustique des relations entre les mouvements biologiques et la perception sonore : application au contrôle de la synthèse et à l'apprentissage de gestes". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4780/document.
This thesis focused on the relations between biological movements and auditory perception, considering the specific case of graphical movements and the friction sounds they produce. The originality of this work lies in the use of sound synthesis processes that are based on a perceptual paradigm and that can be controlled by gesture models. The synthesis model made it possible to generate acoustic stimuli whose timbre was directly modulated by the velocity variations induced by a graphic gesture, in order to focus exclusively on the perceptual influence of this transformational invariant. A first study showed that listeners can recognize biological motion kinematics (the 1/3 power law) and discriminate simple geometric shapes simply by listening to the timbre variations of friction sounds that solely evoke velocity variations. A second study revealed the existence of dynamic prototypes characterized by the sounds corresponding to the most representative elliptic trajectory, suggesting that prototypical shapes may emerge from sensorimotor coupling. A final study showed that the kinematics evoked by friction sounds can significantly affect the dynamic and geometric dimensions of visuomotor coupling, shedding light on the relevance of auditory perception in the multisensory integration of continuous motion in a previously unexplored setting. Together, these theoretical results enabled gestural control of sound synthesis models and the creation of sonification tools for gesture learning and for the rehabilitation of a graphomotor disorder, dysgraphia.
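The 1/3 power law mentioned in this abstract relates tangential velocity to trajectory curvature, v(t) = K·κ(t)^(−1/3). A minimal sketch for an elliptic trajectory (the gain K and the semi-axes below are illustrative values, not parameters from the thesis):

```python
import math

def ellipse_curvature(theta, a, b):
    # Curvature of the ellipse x = a*cos(theta), y = b*sin(theta).
    return (a * b) / (a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2) ** 1.5

def power_law_velocity(theta, a=2.0, b=1.0, K=1.0):
    # 1/3 power law: v = K * curvature^(-1/3), so the motion is fast
    # where the trajectory is flat and slow where it curves sharply.
    return K * ellipse_curvature(theta, a, b) ** (-1.0 / 3.0)
```

A velocity profile of this kind is what the synthesis model would map onto the timbre variations of a friction sound.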
Devaney, Jason Wayne. "A study of articulatory gestures for speech synthesis". Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284254.
Métois, Eric. "Musical sound information : musical gestures and embedding synthesis". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.
Texto completoVigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.
Texto completoCe mémoire de thèse présente une nouvelle interface pour l'expression musicale combinant la synthèse sonore par concaténation et les technologies de captation de mouvements dans l'espace. Ce travail commence par une présentation des dispositifs de capture de position de type main-libre, en étudiant leur principes de fonctionnement et leur caractéristiques. Des exemples de leur application dans les contextes musicaux sont aussi étudiés. Une attention toute particulière est accordée à quatre systèmes: leurs spécifications techniques ainsi que leurs performances (évaluées par des métriques quantitatives) sont comparées expérimentalement. Ensuite, la synthèse concaténative est décrite. Cette technique de synthèse sonore consiste à synthéthiser une séquence musicale cible à partir de sons pré-enregistrés, sélectionnés et concaténés en fonction de leur adéquation avec la cible. Trois implémentations de cette technique sont comparées, permettant ainsi d'en choisir une pour notre application. Enfin, nous décrivons SoundCloud, une nouvelle interface qui, en ajoutant une interface visuelle à la méthode de synthèse concaténative, permet d'en étendre les possibilités de contrôle. SoundCloud permet en effet de contrôler la synthése de sons en utilisant des gestes libres des mains pour naviguer au sein d'un espace tridimensionnel de descripteurs des sons d'une base de données.
Maestre Gómez, Esteban. "Modeling instrumental gestures: an analysis/synthesis framework for violin bowing". Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7562.
This work presents a methodology for modeling instrumental gestures in excitation-continuous musical instruments. In particular, it approaches bowing control in classical violin performance. Nearly non-intrusive sensing techniques are introduced and applied for accurately acquiring relevant timbre-related bowing control parameter signals and constructing a performance database. By defining a vocabulary of bowing parameter envelopes, the contours of bow velocity, bow pressing force, and bow-bridge distance are modeled as sequences of Bézier cubic curve segments, yielding a robust parameterization that is well suited for reconstructing original contours with significant fidelity. An analysis/synthesis statistical modeling framework is constructed from a database of parameterized contours of bowing controls, enabling a flexible mapping between score annotations and bowing parameter envelopes. The framework is used for score-based generation of synthetic bowing parameter contours through a bow planning algorithm able to reproduce constraints imposed by the finite length of the bow. Rendered bowing control signals are successfully applied to automatic performance by driving offline violin sound generation through two of the most widespread techniques: digital waveguide physical modeling and sample-based synthesis.
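A bowing-parameter envelope segment modeled as a cubic Bézier curve, as this abstract describes, can be evaluated in closed form. The endpoint and control values in the usage below are illustrative, not values from the thesis database:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # Closed-form cubic Bézier evaluation at parameter t in [0, 1]:
    # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def sample_envelope(p0, p1, p2, p3, n=16):
    # Discretize one scalar contour segment (e.g. bow velocity) into n samples.
    return [cubic_bezier(p0, p1, p2, p3, i / (n - 1)) for i in range(n)]
```

A full contour would chain several such segments end to end, with the segment boundaries and control points fitted to the recorded bowing data.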
Books on the topic "Gesture Synthesis"
Bernstein, Zachary. Thinking In and About Music. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190949235.001.0001.
Bennett, Christopher. Grace, Freedom, and the Expression of Emotion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198766858.003.0010.
Silver, Morris. Economic Structures of Antiquity. Greenwood Publishing Group, Inc., 1995. http://dx.doi.org/10.5040/9798400643606.
Texto completoCapítulos de libros sobre el tema "Gesture Synthesis"
Losson, Olivier and Jean-Marc Vannobel. "Sign Specification and Synthesis". In Gesture-Based Communication in Human-Computer Interaction, 239–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_21.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters". In Handbook of Human Motion, 2201–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_5.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters". In Handbook of Human Motion, 1–12. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_5-1.
Olivier, Patrick. "Gesture Synthesis in a Real-World ECA". In Lecture Notes in Computer Science, 319–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24842-2_35.
Wachsmuth, Ipke and Stefan Kopp. "Lifelike Gesture Synthesis and Timing for Conversational Agents". In Gesture and Sign Language in Human-Computer Interaction, 120–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_13.
Hartmann, Björn, Maurizio Mancini and Catherine Pelachaud. "Implementing Expressive Gesture Synthesis for Embodied Conversational Agents". In Lecture Notes in Computer Science, 188–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_22.
Julliard, Frédéric and Sylvie Gibet. "Reactiva’Motion Project: Motion Synthesis Based on a Reactive Representation". In Gesture-Based Communication in Human-Computer Interaction, 265–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_23.
Arfib, Daniel and Loïc Kessous. "Gestural Control of Sound Synthesis and Processing Algorithms". In Gesture and Sign Language in Human-Computer Interaction, 285–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_30.
Zhang, Fan, Naye Ji, Fuxing Gao and Yongping Li. "DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model". In MultiMedia Modeling, 231–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_18.
Crombie Smith, Kirsty and William Edmondson. "The Development of a Computational Notation for Synthesis of Sign and Gesture". In Gesture-Based Communication in Human-Computer Interaction, 312–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24598-8_29.
Texto completoActas de conferencias sobre el tema "Gesture Synthesis"
Bargmann, Robert, Volker Blanz and Hans-Peter Seidel. "A nonlinear viseme model for triphone-based speech synthesis". In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813362.
Sargin, M. E., O. Aran, A. Karpov, F. Ofli, Y. Yasinnik, S. Wilson, E. Erzin, Y. Yemez and A. M. Tekalp. "Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis". In 2006 IEEE International Conference on Multimedia and Expo. IEEE, 2006. http://dx.doi.org/10.1109/icme.2006.262663.
Lu, Shuhong, Youngwoo Yoon and Andrew Feng. "Co-Speech Gesture Synthesis using Discrete Gesture Token Learning". In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10342027.
Wang, Siyang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter and Éva Székely. "Integrated Speech and Gesture Synthesis". In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479914.
Breidt, Martin, Heinrich H. Bülthoff and Cristobal Curio. "Robust semantic analysis by synthesis of 3D facial motion". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771336.
Liu, Kang and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771384.
Gunes, Hatice, Bjorn Schuller, Maja Pantic and Roddy Cowie. "Emotion representation, analysis and synthesis in continuous space: A survey". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771357.
Liu, Kang and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771401.
Han, Huijian, Rongjun Song and Yanqiang Fu. "One Algorithm of Gesture Animation Synthesis". In 2016 12th International Conference on Computational Intelligence and Security (CIS). IEEE, 2016. http://dx.doi.org/10.1109/cis.2016.0091.
Lee, Chan-Su and Dimitris Samaras. "Analysis and synthesis of facial expressions using decomposable nonlinear generative models". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771360.