Journal articles on the topic "Gesture Synthesis"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 50 best journal articles for studies on the topic "Gesture Synthesis".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.
Pang, Kunkun, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–12. http://dx.doi.org/10.1145/3592456.
Deng, Linhai. "FPGA-based gesture recognition and voice interaction". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.
Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.
Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance". Computer Music Journal 38, no. 4 (December 2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.
Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition". Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.
Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures". Acta Acustica united with Acustica 96, no. 4 (July 1, 2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.
He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures". International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.
Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.
Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis". Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.
Nakano, Atsushi, and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning". Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.
Arfib, D., J. M. Couturier, L. Kessous, and V. Verfaille. "Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces". Organised Sound 7, no. 2 (August 2002): 127–44. http://dx.doi.org/10.1017/s1355771802002054.
Dang, Xiaochao, Wenze Ke, Zhanjun Hao, Peng Jin, Han Deng, and Ying Sheng. "mm-TPG: Traffic Policemen Gesture Recognition Based on Millimeter Wave Radar Point Cloud". Sensors 23, no. 15 (July 31, 2023): 6816. http://dx.doi.org/10.3390/s23156816.
Valencia, C. Roncancio, J. Gomez Garcia-Bermejo, and E. Zalama Casanova. "Combined Gesture-Speech Recognition and Synthesis Using Neural Networks". IFAC Proceedings Volumes 41, no. 2 (2008): 2968–73. http://dx.doi.org/10.3182/20080706-5-kr-1001.00499.
Montgermont, Nicolas, Benoit Fabre, and Patricio De La Cuadra. "Gesture synthesis: basic control of a flute physical model". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3797. http://dx.doi.org/10.1121/1.2935477.
Alexanderson, Simon, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. "Style‐Controllable Speech‐Driven Gesture Synthesis Using Normalising Flows". Computer Graphics Forum 39, no. 2 (May 2020): 487–96. http://dx.doi.org/10.1111/cgf.13946.
Mo, Dong-Han, Chuen-Lin Tien, Yu-Ling Yeh, Yi-Ru Guo, Chern-Sheng Lin, Chih-Chin Chen, and Che-Ming Chang. "Design of Digital-Twin Human-Machine Interface Sensor with Intelligent Finger Gesture Recognition". Sensors 23, no. 7 (March 27, 2023): 3509. http://dx.doi.org/10.3390/s23073509.
Yu, Shi Cai, and Rong Lu. "Research of Sign Language Synthesis Based on VRML". Applied Mechanics and Materials 347-350 (August 2013): 2631–35. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2631.
Lanzalone, Silvia. "Hidden grids: paths of expressive gesture between instruments, music and dance". Organised Sound 5, no. 1 (April 2000): 17–26. http://dx.doi.org/10.1017/s1355771800001047.
Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, and Alexey Karpov. "A Multimodal User Interface for an Assistive Robotic Shopping Cart". Electronics 9, no. 12 (December 8, 2020): 2093. http://dx.doi.org/10.3390/electronics9122093.
K, Kavyasree. "Hand Glide: Gesture-Controlled Virtual Mouse with Voice Assistant". International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5470–76. http://dx.doi.org/10.22214/ijraset.2024.61178.
Rasamimanana, Nicolas, Florian Kaiser, and Frederic Bevilacqua. "Perspectives on Gesture–Sound Relationships Informed from Acoustic Instrument Studies". Organised Sound 14, no. 2 (June 29, 2009): 208–16. http://dx.doi.org/10.1017/s1355771809000314.
Martin, Jean-Claude, Radoslaw Niewiadomski, Laurence Devillers, Stephanie Buisine, and Catherine Pelachaud. "Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions". International Journal of Humanoid Robotics 3, no. 3 (September 2006): 269–91. http://dx.doi.org/10.1142/s0219843606000825.
Zhou, Yuxuan, Huangxun Chen, Chenyu Huang, and Qian Zhang. "WiAdv". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 2 (July 4, 2022): 1–25. http://dx.doi.org/10.1145/3534618.
Ketabdar, Hamed, Amin Haji-Abolhassani, and Mehran Roshandel. "MagiThings". International Journal of Mobile Human Computer Interaction 5, no. 3 (July 2013): 23–41. http://dx.doi.org/10.4018/jmhci.2013070102.
Thoret, Etienne, Mitsuko Aramaki, Charles Gondre, Sølvi Ystad, and Richard Kronland-Martinet. "Eluding the Physical Constraints in a Nonlinear Interaction Sound Synthesis Model for Gesture Guidance". Applied Sciences 6, no. 7 (June 30, 2016): 192. http://dx.doi.org/10.3390/app6070192.
Camurri, Antonio, Giovanni De Poli, Anders Friberg, Marc Leman, and Gualtiero Volpe. "The MEGA Project: Analysis and Synthesis of Multisensory Expressive Gesture in Performing Art Applications". Journal of New Music Research 34, no. 1 (March 2005): 5–21. http://dx.doi.org/10.1080/09298210500123895.
Bouënard, Alexandre, Marcelo M. Wanderley, Sylvie Gibet, and Fabrice Marandola. "Virtual Gesture Control and Synthesis of Music Performances: Qualitative Evaluation of Synthesized Timpani Exercises". Computer Music Journal 35, no. 3 (September 2011): 57–72. http://dx.doi.org/10.1162/comj_a_00069.
Deinega, Volodymyr. "Influence of Timbral Sound Coloring on the Evolution of the Conductor's Gesture". Часопис Національної музичної академії України ім.П.І.Чайковського, no. 3(60) (September 27, 2023): 85–97. http://dx.doi.org/10.31318/2414-052x.3(60).2023.296801.
Nichols, Charles. "The vBow: a virtual violin bow controller for mapping gesture to synthesis with haptic feedback". Organised Sound 7, no. 2 (August 2002): 215–20. http://dx.doi.org/10.1017/s135577180200211x.
Phukon, Debasish. "A Deep Learning Approach for ASL Recognition and Text-to-Speech Synthesis using CNN". International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (August 31, 2023): 2135–43. http://dx.doi.org/10.22214/ijraset.2023.55528.
Harrison, Reginald Langford, Stefan Bilbao, James Perry, and Trevor Wishart. "An Environment for Physical Modeling of Articulated Brass Instruments". Computer Music Journal 39, no. 4 (December 2015): 80–95. http://dx.doi.org/10.1162/comj_a_00332.
Polykretis, Ioannis, Aditi Patil, Mridul Aanjaneya, and Konstantinos Michmizos. "An Interactive Framework for Visually Realistic 3D Motion Synthesis using Evolutionarily-trained Spiking Neural Networks". Proceedings of the ACM on Computer Graphics and Interactive Techniques 6, no. 1 (May 12, 2023): 1–19. http://dx.doi.org/10.1145/3585509.
Wołk, Krzysztof, Agnieszka Wołk, and Wojciech Glinkowski. "A Cross-Lingual Mobile Medical Communication System Prototype for Foreigners and Subjects with Speech, Hearing, and Mental Disabilities Based on Pictograms". Computational and Mathematical Methods in Medicine 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4306416.
Qureshi, Regula Burckhardt. "Musical Gesture and Extra-Musical Meaning: Words and Music in the Urdu Ghazal". Journal of the American Musicological Society 43, no. 3 (1990): 457–97. http://dx.doi.org/10.2307/831743.
Moore, Carol-Lynne. "God geometricizes (and so does Laban)". Dance, Movement & Spiritualities 10, no. 1 (October 1, 2023): 73–90. http://dx.doi.org/10.1386/dmas_00047_1.
Huang, Chih-Fang, and Wei-Po Nien. "A Study of the Integrated Automated Emotion Music with the Motion Gesture Synthesis via ZigBee Wireless Communication". International Journal of Distributed Sensor Networks 9, no. 11 (January 2013): 645961. http://dx.doi.org/10.1155/2013/645961.
Lee, Donghee, Dayoung You, Gyoungryul Cho, Hoirim Lee, Eunsoo Shin, Taehwan Choi, Sunghan Kim, Sangmin Lee, and Woochul Nam. "EMG-based hand gesture classifier robust to daily variation: Recursive domain adversarial neural network with data synthesis". Biomedical Signal Processing and Control 88 (February 2024): 105600. http://dx.doi.org/10.1016/j.bspc.2023.105600.
Alexanderson, Simon, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. "Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–20. http://dx.doi.org/10.1145/3592458.
Li, Yuerong, Xingce Wang, Zhongke Wu, Guoshuai Li, Shaolong Liu, and Mingquan Zhou. "Flexible indoor scene synthesis based on multi-object particle swarm intelligence optimization and user intentions with 3D gesture". Computers & Graphics 93 (December 2020): 1–12. http://dx.doi.org/10.1016/j.cag.2020.08.002.
Van Nort, Doug, Marcelo M. Wanderley, and Philippe Depalle. "Mapping Control Structures for Sound Synthesis: Functional and Topological Perspectives". Computer Music Journal 38, no. 3 (September 2014): 6–22. http://dx.doi.org/10.1162/comj_a_00253.
Saafi, Houssem, Med Amine Laribi, and Said Zeghloul. "Design of a 4-DoF (degree of freedom) hybrid-haptic device for laparoscopic surgery". Mechanical Sciences 12, no. 1 (February 12, 2021): 155–64. http://dx.doi.org/10.5194/ms-12-155-2021.
Lin, Xu, and Gao Wen. "Human-Computer Chinese Sign Language Interaction System". International Journal of Virtual Reality 4, no. 3 (January 1, 2000): 82–92. http://dx.doi.org/10.20870/ijvr.2000.4.3.2651.
Chopparapu, SaiTeja, and Joseph Beatrice Seventline. "An Efficient Multi-modal Facial Gesture-based Ensemble Classification and Reaction to Sound Framework for Large Video Sequences". Engineering, Technology & Applied Science Research 13, no. 4 (August 9, 2023): 11263–70. http://dx.doi.org/10.48084/etasr.6087.
Du, Chuan, Lei Zhang, Xiping Sun, Junxu Wang, and Jialian Sheng. "Enhanced Multi-Channel Feature Synthesis for Hand Gesture Recognition Based on CNN With a Channel and Spatial Attention Mechanism". IEEE Access 8 (2020): 144610–20. http://dx.doi.org/10.1109/access.2020.3010063.
Chiu, Jih-Ching, Guan-Yi Lee, Chih-Yang Hsieh, and Qing-You Lin. "Design and Implementation of Nursing-Secure-Care System with mmWave Radar by YOLO-v4 Computing Methods". Applied System Innovation 7, no. 1 (January 19, 2024): 10. http://dx.doi.org/10.3390/asi7010010.
Movchan, Larisa Anatol'evna. "The specifics of teaching the techniques of conducting gesture in the "Choral conducting" classroom". PHILHARMONICA. International Music Journal, no. 1 (January 2024): 18–32. http://dx.doi.org/10.7256/2453-613x.2024.1.69898.
Desvages, Charlotte, and Stefan Bilbao. "Two-Polarisation Physical Model of Bowed Strings with Nonlinear Contact and Friction Forces, and Application to Gesture-Based Sound Synthesis". Applied Sciences 6, no. 5 (May 10, 2016): 135. http://dx.doi.org/10.3390/app6050135.
Ramos Flores, Cristohper. "The bodyless sound and the re-embodied sound: an expansion of the sonic body of the instrument". Ricercare, no. 15 (August 21, 2022): 30–58. http://dx.doi.org/10.17230/ricercare.2022.15.2.
Jokisch, Oliver, and Markus Huber. "Advances in the development of a cognitive user interface". MATEC Web of Conferences 161 (2018): 01003. http://dx.doi.org/10.1051/matecconf/201816101003.
Magnenat-Thalmann, Nadia, and Arjan Egges. "Interactive Virtual Humans in Real-Time Virtual Environment". International Journal of Virtual Reality 5, no. 2 (January 1, 2006): 15–24. http://dx.doi.org/10.20870/ijvr.2006.5.2.2682.