A selection of scholarly literature on the topic "Gesture Synthesis"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Gesture Synthesis."
Next to every entry in the list of references you will find an "Add to bibliography" option. Use it, and a bibliographic reference for the selected work will be generated automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its abstract online whenever the corresponding details are available in the metadata.
Journal articles on the topic "Gesture Synthesis"
Pang, Kunkun, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–12. http://dx.doi.org/10.1145/3592456.
Deng, Linhai. "FPGA-based gesture recognition and voice interaction." Applied and Computational Engineering 40, no. 1 (February 21, 2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.
Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.
Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance." Computer Music Journal 38, no. 4 (December 2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.
Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition." Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.
Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures." Acta Acustica united with Acustica 96, no. 4 (July 1, 2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.
He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures." International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.
Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.
Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis." Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.
Nakano, Atsushi, and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning." Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.
Dissertations and theses on the topic "Gesture Synthesis"
Faggi, Simone. "An Evaluation Model for Speech-Driven Gesture Synthesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.
Marrin Nakra, Teresa (Teresa Anne), 1970-. "Inside the conductor's jacket: analysis, interpretation and musical synthesis of expressive gesture." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9165.
Der volle Inhalt der QuelleIncludes bibliographical references (leaves 154-167).
We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as to not encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. The Conductor's Jacket was used to gather conducting data from six subjects, three professional conductors and three students, during twelve hours of rehearsals and performances. Analyses of the data yielded thirty-five significant features that seem to reflect intuitive and natural gestural tendencies, including context-based hand switching, anticipatory 'flatlining' effects, and correlations between respiration and phrasing. The results indicate that muscle tension and respiration signals reflect several significant and expressive characteristics of a conductor's gestures. From these results we present nine hypotheses about human musical expression, including ideas about efficiency, intentionality, polyphony, signal-to-noise ratios, and musical flow state. Finally, this thesis describes the Gesture Construction, a musical software system that analyzes and performs music in real-time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms.
These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score.
by Teresa Marrin Nakra.
Ph.D.
Pun, James Chi-Him. "Gesture recognition with application in music arrangement." Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.
Wang, Yizhong Johnty. "Investigation of gesture control for articulatory speech synthesis with a bio-mechanical mapping layer." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43193.
Pérez Carrillo, Alfonso Antonio. "Enhancing spectral synthesis techniques with performance gestures using the violin as a case study." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7264.
Thoret, Etienne. "Caractérisation acoustique des relations entre les mouvements biologiques et la perception sonore : application au contrôle de la synthèse et à l'apprentissage de gestes." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4780/document.
This thesis focused on the relations between biological movements and auditory perception, considering the specific case of graphical movements and the friction sounds they produce. The originality of this work lies in the use of sound-synthesis processes that are based on a perceptual paradigm and can be controlled by gesture models. The synthesis model made it possible to generate acoustic stimuli whose timbre was directly modulated by the velocity variations induced by a graphic gesture, in order to focus exclusively on the perceptual influence of this transformational invariant. A first study showed that listeners can recognize biological motion kinematics (the 1/3 power law) and discriminate simple geometric shapes simply by listening to the timbre variations of friction sounds that evoke only velocity variations. A second study revealed the existence of dynamic prototypes characterized by sounds corresponding to the most representative elliptic trajectory, suggesting that prototypical shapes may emerge from sensorimotor coupling. A final study showed that the kinematics evoked by friction sounds can significantly affect the dynamic and geometric dimensions of visuo-motor coupling, shedding critical light on the relevance of auditory perception in the multisensory integration of continuous motion in a previously unexplored situation. Together, these theoretical results enabled the gestural control of sound-synthesis models and the creation of sonification tools for gesture learning and for the rehabilitation of a graphomotor disorder, dysgraphia.
Devaney, Jason Wayne. "A study of articulatory gestures for speech synthesis." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284254.
Métois, Eric. "Musical sound information: musical gestures and embedding synthesis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.
Vigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.
This thesis presents a new interface for musical expression that combines concatenative sound synthesis with technologies for capturing movement in space. The work begins with a survey of free-hand position-tracking devices, examining their operating principles and characteristics, together with examples of their application in musical contexts. Particular attention is given to four systems: their technical specifications and their performance (evaluated with quantitative metrics) are compared experimentally. Concatenative synthesis is then described. This sound-synthesis technique renders a target musical sequence from pre-recorded sounds that are selected and concatenated according to how closely they match the target. Three implementations of the technique are compared, allowing one to be chosen for our application. Finally, we describe SoundCloud, a new interface that extends the control possibilities of concatenative synthesis by adding a visual front end: it allows sound synthesis to be controlled with free-hand gestures by navigating a three-dimensional space of descriptors computed from the sounds in a database.
Maestre Gómez, Esteban. "Modeling instrumental gestures: an analysis/synthesis framework for violin bowing." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7562.
This work presents a methodology for modeling instrumental gestures in excitation-continuous musical instruments. In particular, it approaches bowing control in violin classical performance. Nearly non-intrusive sensing techniques are introduced and applied for accurately acquiring relevant timbre-related bowing control parameter signals and constructing a performance database. By defining a vocabulary of bowing parameter envelopes, the contours of bow velocity, bow pressing force, and bow-bridge distance are modeled as sequences of Bézier cubic curve segments, yielding a robust parameterization that is well suited for reconstructing original contours with significant fidelity. An analysis/synthesis statistical modeling framework is constructed from a database of parameterized contours of bowing controls, enabling a flexible mapping between score annotations and bowing parameter envelopes. The framework is used for score-based generation of synthetic bowing parameter contours through a bow planning algorithm able to reproduce possible constraints imposed by the finite length of the bow. Rendered bowing control signals are successfully applied to automatic performance by being used for driving offline violin sound generation through two of the most extended techniques: digital waveguide physical modeling, and sample-based synthesis.
Books on the topic "Gesture Synthesis"
Bernstein, Zachary. Thinking In and About Music. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190949235.001.0001.
Bennett, Christopher. Grace, Freedom, and the Expression of Emotion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198766858.003.0010.
Silver, Morris. Economic Structures of Antiquity. Greenwood Publishing Group, Inc., 1995. http://dx.doi.org/10.5040/9798400643606.
Der volle Inhalt der QuelleBuchteile zum Thema "Gesture Synthesis"
Losson, Olivier, and Jean-Marc Vannobel. "Sign Specification and Synthesis." In Gesture-Based Communication in Human-Computer Interaction, 239–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_21.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion, 2201–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_5.
Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion, 1–12. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_5-1.
Olivier, Patrick. "Gesture Synthesis in a Real-World ECA." In Lecture Notes in Computer Science, 319–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24842-2_35.
Wachsmuth, Ipke, and Stefan Kopp. "Lifelike Gesture Synthesis and Timing for Conversational Agents." In Gesture and Sign Language in Human-Computer Interaction, 120–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_13.
Hartmann, Björn, Maurizio Mancini, and Catherine Pelachaud. "Implementing Expressive Gesture Synthesis for Embodied Conversational Agents." In Lecture Notes in Computer Science, 188–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_22.
Julliard, Frédéric, and Sylvie Gibet. "Reactiva'Motion Project: Motion Synthesis Based on a Reactive Representation." In Gesture-Based Communication in Human-Computer Interaction, 265–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_23.
Arfib, Daniel, and Loïc Kessous. "Gestural Control of Sound Synthesis and Processing Algorithms." In Gesture and Sign Language in Human-Computer Interaction, 285–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_30.
Zhang, Fan, Naye Ji, Fuxing Gao, and Yongping Li. "DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model." In MultiMedia Modeling, 231–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_18.
Crombie Smith, Kirsty, and William Edmondson. "The Development of a Computational Notation for Synthesis of Sign and Gesture." In Gesture-Based Communication in Human-Computer Interaction, 312–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24598-8_29.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Gesture Synthesis"
Bargmann, Robert, Volker Blanz, and Hans-Peter Seidel. "A nonlinear viseme model for triphone-based speech synthesis." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813362.
Sargin, M. E., O. Aran, A. Karpov, F. Ofli, Y. Yasinnik, S. Wilson, E. Erzin, Y. Yemez, and A. M. Tekalp. "Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis." In 2006 IEEE International Conference on Multimedia and Expo. IEEE, 2006. http://dx.doi.org/10.1109/icme.2006.262663.
Lu, Shuhong, Youngwoo Yoon, and Andrew Feng. "Co-Speech Gesture Synthesis using Discrete Gesture Token Learning." In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10342027.
Wang, Siyang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, and Éva Székely. "Integrated Speech and Gesture Synthesis." In ICMI '21: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479914.
Breidt, Martin, Heinrich H. Bülthoff, and Cristobal Curio. "Robust semantic analysis by synthesis of 3D facial motion." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771336.
Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771384.
Gunes, Hatice, Bjorn Schuller, Maja Pantic, and Roddy Cowie. "Emotion representation, analysis and synthesis in continuous space: A survey." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771357.
Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771401.
Han, Huijian, Rongjun Song, and Yanqiang Fu. "One Algorithm of Gesture Animation Synthesis." In 2016 12th International Conference on Computational Intelligence and Security (CIS). IEEE, 2016. http://dx.doi.org/10.1109/cis.2016.0091.
Lee, Chan-Su, and Dimitris Samaras. "Analysis and synthesis of facial expressions using decomposable nonlinear generative models." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771360.