Selected scientific literature on the topic "Gesture Synthesis"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Gesture Synthesis".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is included in the metadata.

Journal articles on the topic "Gesture Synthesis"

1

Pang, Kunkun, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer". ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–12. http://dx.doi.org/10.1145/3592456.

Abstract:
Automatic gesture synthesis from speech is a topic that has attracted researchers for applications in remote communication, video games and Metaverse. Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training. In this paper, we propose a novel transformer-based framework for automatic 3D body gesture synthesis from speech. To learn the stochastic nature of the body gesture during speech, we propose a variational transformer to effectively model a probabilistic distribution over gestures, which can produce diverse gestures during inference. Furthermore, we introduce a mode positional embedding layer to capture the different motion speeds in different speaking modes. To cope with the scarcity of data, we design an intra-modal pre-training scheme that can learn the complex mapping between the speech and the 3D gesture from a limited amount of data. Our system is trained with either the Trinity speech-gesture dataset or the Talking With Hands 16.2M dataset. The results show that our system can produce more realistic, appropriate, and diverse body gestures compared to existing state-of-the-art approaches.
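
A minimal sketch of the variational-transformer idea described in this abstract: a transformer encoder maps per-frame speech features to a latent Gaussian, and resampling that latent at inference yields diverse pose sequences. All module names and dimensions below (speech_dim, pose_dim, etc.) are hypothetical illustrations, not the authors' BodyFormer code.

```python
# Minimal sketch of a speech-conditioned variational transformer for gesture
# synthesis (hypothetical dimensions; not the authors' BodyFormer code).
import torch
import torch.nn as nn

class VariationalGestureTransformer(nn.Module):
    def __init__(self, speech_dim=80, pose_dim=165, d_model=256, latent_dim=64):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.to_mu = nn.Linear(d_model, latent_dim)      # per-frame latent mean
        self.to_logvar = nn.Linear(d_model, latent_dim)  # per-frame latent log-variance
        self.decoder = nn.Linear(d_model + latent_dim, pose_dim)

    def forward(self, speech):                 # speech: (batch, frames, speech_dim)
        h = self.encoder(self.speech_proj(speech))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        poses = self.decoder(torch.cat([h, z], dim=-1))
        return poses, mu, logvar               # mu/logvar feed a KL term in training

model = VariationalGestureTransformer()
mel = torch.randn(1, 120, 80)                  # 120 frames of stand-in speech features
gestures, mu, logvar = model(mel)              # resampling z gives diverse gestures
print(gestures.shape)                          # torch.Size([1, 120, 165])
```
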
2

Deng, Linhai. "FPGA-based gesture recognition and voice interaction". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.

Abstract:
Human gestures, a fundamental trait, enable human-machine interactions and possibilities in interfaces. Amid technological advancements, gesture recognition research has gained prominence. Gesture recognition possesses merits in sample acquisition and intricate delineation, and delving into its nuances remains significant. Existing techniques leverage PC-based OpenCV and deep learning's computational prowess, showcasing complexity. This scholarly exposition outlines an experimental framework, centered on a mobile FPGA, for enhanced gesture recognition. Specifically, it uses the DE2-115 FPGA as the foundation for image discernment, while a 51 microcontroller assists in auditory synthesis. Our emphasis lies in recognizing basic gestures, particularly within the rock-paper-scissors taxonomy, to ensure precision and accuracy. This research underscores the potential of FPGAs for efficient gesture recognition on mobile platforms. As a result, the experiments conducted in this work successfully realize the recognition of simple gestures, such as the numbers 1, 2, 3, and 4, as well as rock-paper-scissors.
3

Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.

Abstract:
Automatic synthesis of realistic co-speech gestures is an increasingly important yet challenging task in artificial embodied agent creation. Previous systems mainly focus on generating gestures in an end-to-end manner, which leads to difficulties in mining the clear rhythm and semantics due to the complex yet subtle harmony between speech and gestures. We present a novel co-speech gesture synthesis method that achieves convincing results both on the rhythm and semantics. For the rhythm, our system contains a robust rhythm-based segmentation pipeline to ensure the temporal coherence between the vocalization and gestures explicitly. For the gesture semantics, we devise a mechanism to effectively disentangle both low- and high-level neural embeddings of speech and motion based on linguistic theory. The high-level embedding corresponds to semantics, while the low-level embedding relates to subtle variations. Lastly, we build correspondence between the hierarchical embeddings of the speech and the motion, resulting in rhythm- and semantics-aware gesture synthesis. Evaluations with existing objective metrics, a newly proposed rhythmic metric, and human feedback show that our method outperforms state-of-the-art systems by a clear margin.
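
The rhythm-based segmentation pipeline mentioned in this abstract can be pictured with a small sketch: given beat times extracted from the speech, the motion sequence is cut into beat-aligned blocks so that gestures stay temporally coherent with the vocalization. The beat times and frame rate below are invented inputs; this is an illustration, not the paper's pipeline.

```python
# Sketch of beat-aligned segmentation: cut a motion sequence into blocks at
# given beat times (invented inputs; the paper's actual pipeline is more robust).
import numpy as np

def segment_by_beats(motion, fps, beat_times):
    """Split a (frames, dof) motion array into per-beat blocks."""
    cut_frames = [int(round(t * fps)) for t in beat_times]
    cut_frames = [f for f in cut_frames if 0 < f < len(motion)]
    return np.split(motion, cut_frames)

motion = np.random.randn(300, 69)          # 10 s of 30 fps motion, 69 DoF
beats = [0.52, 1.08, 1.61, 2.15, 2.70]     # beat onsets in seconds (hypothetical)
blocks = segment_by_beats(motion, fps=30, beat_times=beats)
print([b.shape[0] for b in blocks])        # frames per beat-aligned block
```
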
4

Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance". Computer Music Journal 38, no. 4 (December 2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.

Abstract:
The technology of depth cameras has made designing gesture-based augmentation for existing instruments inexpensive. We explored the use of this technology to augment keyboard performance with 3-D continuous gesture controls. In a user study, we compared the control of one or two continuous parameters using gestures versus the traditional control using pitch and modulation wheels. We found that the choice of mapping depends on the choice of synthesis parameter in use, and that the gesture control under suitable mappings can outperform pitch-wheel performance when two parameters are controlled simultaneously.
5

Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition". Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.

Abstract:
We show a method for granular synthesis composition based on a mathematical modeling of the musical gesture. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. The gestures can be deterministic, defined by mathematical time functions; drawn freehand; or even randomly generated. The parametric information of the gestures is interpreted through OSC messages by a granular synthesizer (Granular Streamer). The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score (Granular Score). The method is amenable to statistical analysis of the granular sound streams and of the final music composition. We also offer a way to create granular streams based on correlated pairs of grain parameters.
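
As an illustration of the deterministic case, a gesture can be a mathematical time function whose samples are mapped to grain parameters. The sketch below is in Python rather than the authors' MATLAB, and the gesture-to-grain mapping is invented.

```python
# Sketch: a deterministic "gesture" as a mathematical time function whose
# samples drive granular-synthesis parameters (invented mapping; the paper
# uses MATLAB scripts and OSC messages to a granular synthesizer).
import numpy as np

t = np.linspace(0.0, 5.0, 200)                 # 5-second gesture, 200 grains
gesture = 0.5 * (1 + np.sin(2 * np.pi * 0.4 * t) * np.exp(-0.3 * t))

grain_amp = 0.1 + 0.9 * gesture                # curve height -> grain amplitude
grain_pitch = 220.0 * 2 ** (2 * gesture)       # curve height -> pitch, 220-880 Hz
grain_onset = t                                # one grain per sample of the curve

for onset, amp, hz in list(zip(grain_onset, grain_amp, grain_pitch))[:3]:
    # In the paper, these parameters would be sent as OSC messages
    # to the granular synthesizer (Granular Streamer).
    print(f"grain @ {onset:.3f}s  amp={amp:.2f}  freq={hz:.1f} Hz")
```
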
6

Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures". Acta Acustica united with Acustica 96, no. 4 (July 1, 2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.

7

He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures". International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.

Abstract:
The automatic synthesis of realistic gestures has the potential to transform the fields of animation, avatars, and communication agents. Although speech-driven gesture generation methods have been proposed and optimized, an evaluation system for synthesized gestures is still lacking. Current evaluation methods require manual participation, which is inefficient for the synthetic-gesture industry and subject to interference from human factors. We therefore need a model that provides an automatic, objective, quantitative quality assessment of synthesized gesture video. We note that recurrent neural networks (RNNs) have advantages in modeling high-level spatiotemporal feature sequences, which makes them well suited to processing synthetic gesture video data. To build an automatic quality assessment system, we propose a model based on a Bi-LSTM and make a small adjustment to its attention mechanism. We also propose an evaluation method and design experiments to show that the adjusted model can perform quantitative evaluation of synthesized gestures. In terms of performance, the model improves by about 20% over the version before the adjustment.
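
A minimal sketch of the kind of model the abstract describes: a bidirectional LSTM over per-frame gesture features, an attention layer that pools the sequence, and a regression head that outputs a scalar quality score. Dimensions and names are hypothetical, not the paper's exact architecture.

```python
# Sketch of a Bi-LSTM quality-assessment model with attention pooling
# (hypothetical dimensions; not the paper's exact architecture).
import torch
import torch.nn as nn

class GestureQualityNet(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention logit per frame
        self.head = nn.Linear(2 * hidden, 1)   # scalar quality score

    def forward(self, x):                      # x: (batch, frames, feat_dim)
        h, _ = self.lstm(x)                    # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1) # attention weights over frames
        pooled = (w * h).sum(dim=1)            # attention-weighted sequence summary
        return self.head(pooled).squeeze(-1)

model = GestureQualityNet()
clips = torch.randn(4, 90, 128)                # 4 synthetic gesture clips
print(model(clips).shape)                      # torch.Size([4]) quality scores
```
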
8

Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.

Abstract:
This study aims to improve the generation of 3D gestures by utilizing multimodal information from human speech. Previous studies have focused on incorporating additional modalities to enhance the quality of generated gestures. However, these methods perform poorly when certain modalities are missing during inference. To address this problem, we suggest using speech-derived multimodal priors to improve gesture generation. We introduce a novel method that separates priors from speech and employs multimodal priors as constraints for generating gestures. Our approach utilizes a chain-like modeling method to generate facial blendshapes, body movements, and hand gestures sequentially. Specifically, we incorporate rhythm cues derived from facial deformation and a stylization prior based on speech emotions into the process of generating gestures. By incorporating multimodal priors, our method improves the quality of generated gestures and eliminates the need for expensive setup preparation during inference. Extensive experiments and user studies confirm that our proposed approach achieves state-of-the-art performance.
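
The chain-like modeling can be sketched as three generators run in sequence, each conditioned on the speech features plus the outputs of the earlier links (face, then body, then hands). The module sizes below are invented; the sketch only illustrates the cascade, not the paper's networks.

```python
# Sketch of chain-like (cascaded) generation: face -> body -> hands, each stage
# conditioned on speech plus all earlier outputs (invented sizes; illustration only).
import torch
import torch.nn as nn

speech_dim, face_dim, body_dim, hand_dim = 128, 52, 57, 90

face_net = nn.Linear(speech_dim, face_dim)                        # blendshapes
body_net = nn.Linear(speech_dim + face_dim, body_dim)             # body pose
hand_net = nn.Linear(speech_dim + face_dim + body_dim, hand_dim)  # hand pose

speech = torch.randn(1, 120, speech_dim)        # per-frame speech features
face = face_net(speech)
body = body_net(torch.cat([speech, face], dim=-1))
hands = hand_net(torch.cat([speech, face, body], dim=-1))
print(face.shape, body.shape, hands.shape)      # cascaded conditional outputs
```
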
9

Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis". Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.

10

Nakano, Atsushi, and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning". Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.


Theses / dissertations on the topic "Gesture Synthesis"

1

Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.

Abstract:
The research and development of embodied agents with advanced relational capabilities is constantly evolving. In recent years, the development of behavioural signal generation models to be integrated in social robots and virtual characters, is moving from rule-based to data-driven approaches, requiring appropriate and reliable evaluation techniques. This work proposes a novel machine learning approach for the evaluation of speech-to-gestures models that is independent from the audio source. This approach enables the measurement of the quality of gestures produced by these models and provides a benchmark for their evaluation. Results show that the proposed approach is consistent with evaluations made through user studies and, furthermore, that its use allows for a reliable comparison of speech-to-gestures state-of-the-art models.
2

Marrin Nakra, Teresa (Teresa Anne), 1970-. "Inside the conductor's jacket: analysis, interpretation and musical synthesis of expressive gesture". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9165.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Includes bibliographical references (leaves 154-167).
We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as to not encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. The Conductor's Jacket was used to gather conducting data from six subjects, three professional conductors and three students, during twelve hours of rehearsals and performances. Analyses of the data yielded thirty-five significant features that seem to reflect intuitive and natural gestural tendencies, including context-based hand switching, anticipatory 'flatlining' effects, and correlations between respiration and phrasing. The results indicate that muscle tension and respiration signals reflect several significant and expressive characteristics of a conductor's gestures. From these results we present nine hypotheses about human musical expression, including ideas about efficiency, intentionality, polyphony, signal-to-noise ratios, and musical flow state. Finally, this thesis describes the Gesture Construction, a musical software system that analyzes and performs music in real-time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms. These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score.
3

Pun, James Chi-Him. "Gesture recognition with application in music arrangement". Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.

4

Wang, Yizhong Johnty. "Investigation of gesture control for articulatory speech synthesis with a bio-mechanical mapping layer". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43193.

Abstract:
In the process of working with a real-time, gesture-controlled speech and singing synthesizer used for musical performance, we have documented performer-related issues and provided suggestions to improve future work in the field from an engineering and technician's perspective. One significant detrimental factor in the existing system is the sound quality, which is limited by the one-to-one kinematic mapping between the gesture input and the output. To address this, a force-activated bio-mechanical mapping layer was implemented to drive an articulatory synthesizer, and the results were compared with the existing mapping system on the same task from both the performer and listener perspectives. The results show that adding the complex, dynamic bio-mechanical mapping layer introduces more difficulty but allows the performer a greater degree of expression, which is consistent with existing work in the literature. However, to the novice listener, there is no significant difference in the intelligibility of the sound or the perceived quality. The results suggest that, for browsing through a vowel space, force and position input are comparable when considering output intelligibility alone, but for expressivity a complex input may be more suitable.
5

Pérez Carrillo, Alfonso Antonio. "Enhancing spectral synthesis techniques with performance gestures using the violin as a case study". Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7264.

Abstract:
In this work we investigate new sound synthesis techniques for imitating musical instruments, using the violin as a case study. It is multidisciplinary research, covering several fields such as spectral modeling, machine learning, analysis of musical gestures, and musical acoustics. It addresses sound production with a very empirical approach, based on the analysis of performance gestures as well as on the measurement of acoustical properties of the violin. Based on the characteristics of the main vibrating elements of the violin, we divide the study into two parts, namely bowed string and violin body sound radiation. With regard to the bowed string, we are interested in modeling the influence of bowing controls on the spectrum of string vibration. To accomplish this task we have developed a sensing system for accurate measurement of the bowing parameters. Analysis of real performances allows a better understanding of the bowing control space, its use by performers, and its effect on the timbre of the sound produced. Besides, machine learning techniques are used to design a generative timbre model that is able to predict spectral envelopes corresponding to a sequence of bowing controls. These envelopes can then be filled with harmonic and noisy sound components to produce a synthetic string-vibration signal. In relation to the violin body, a new method for measuring acoustical violin-body impulse responses has been conceived, based on bowed glissandi and a deconvolution algorithm for non-impulsive signals. Excitation is measured as string vibration, and responses are recorded with multiple microphones placed at different angles around the violin, providing complete radiation patterns at all frequencies. The results of both the bowed-string and the violin-body studies have been incorporated into a violin synthesizer prototype based on sample concatenation. Predicted envelopes of the timbre model are applied to the samples as a time-varying filter, which yields smoother concatenations and phrases that follow the nuances of the controlling gestures. These transformed samples are finally convolved with a body impulse response to recreate a realistic violin sound. The different impulse responses used can enhance the listening experience by simulating different violins, or effects such as stereo or violinist motion. Additionally, an expressivity model has been integrated into the synthesizer, adding expressive features such as timing deviations, dynamics, or ornaments, thus augmenting the naturalness of the synthetic performances.
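
The deconvolution of non-impulsive signals mentioned here can be illustrated with a standard regularized frequency-domain division: given the measured excitation (string vibration) and a microphone recording, the impulse response is estimated as a regularized ratio of their spectra. This is a generic textbook sketch under that assumption, not the thesis' exact algorithm.

```python
# Generic sketch of impulse-response estimation by regularized frequency-domain
# deconvolution (textbook method; not the thesis' exact algorithm).
import numpy as np

def estimate_impulse_response(excitation, response, eps=1e-6):
    """H(f) = R(f) * conj(E(f)) / (|E(f)|^2 + eps), then back to time domain."""
    n = len(excitation) + len(response) - 1
    E = np.fft.rfft(excitation, n)
    R = np.fft.rfft(response, n)
    H = R * np.conj(E) / (np.abs(E) ** 2 + eps)   # regularized spectral division
    return np.fft.irfft(H, n)

rng = np.random.default_rng(0)
excitation = rng.standard_normal(4096)            # stand-in for string vibration
true_ir = np.exp(-np.arange(256) / 40.0)          # toy body response
response = np.convolve(excitation, true_ir)       # what a microphone would record
ir = estimate_impulse_response(excitation, response)
print(np.allclose(ir[:256], true_ir, atol=1e-3))  # True: the response is recovered
```
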
6

Thoret, Etienne. "Caractérisation acoustique des relations entre les mouvements biologiques et la perception sonore : application au contrôle de la synthèse et à l'apprentissage de gestes". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4780/document.

Abstract:
This thesis focused on the relations between biological movements and auditory perception, considering the specific case of graphical movements and the friction sounds they produce. The originality of this work lies in the use of sound synthesis processes that are based on a perceptual paradigm and that can be controlled by gesture models. The synthesis model made it possible to generate acoustic stimuli whose timbre was modulated only by the velocity variations induced by a graphic gesture, in order to focus exclusively on the perceptual influence of this transformational invariant. A first study showed that we can recognize the kinematics of biological motion (the 1/3 power law) and discriminate simple geometric shapes simply by listening to the timbre variations of friction sounds that solely evoke velocity variations. A second study revealed the existence of dynamic sound prototypes characterizing elliptic trajectories, showing that prototypical shapes may emerge from sensorimotor coupling. A final study showed that the kinematics evoked by friction sounds significantly affect the kinematics and geometry of a gesture in a task of graphically reproducing the movement of a light spot. This sheds critical light on the relevance of the auditory modality in the multisensory integration of continuous motion, in a situation never explored before. These results enabled the control of synthesis models from gestural descriptions and the creation of sonification tools for gesture learning and for the rehabilitation of a graphomotor disorder, dysgraphia.
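
For reference, the kinematic regularity the abstract calls the 1/3 power law links the tangential velocity of a movement to the curvature of its trajectory. The formulation below is the standard one from the motor-control literature, not an equation reproduced from the thesis:

```latex
% One-third power law of biological motion (standard formulation):
% tangential velocity v drops as trajectory curvature kappa increases.
\[
  v(t) = K \,\kappa(t)^{-1/3}
  \qquad\Longleftrightarrow\qquad
  A(t) = K \, C(t)^{2/3},
\]
% where v is the tangential velocity, kappa (= C) is the curvature,
% A = v * C is the angular velocity (the classical two-thirds power law
% form), and K is the velocity gain factor setting the movement tempo.
```
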
7

Devaney, Jason Wayne. "A study of articulatory gestures for speech synthesis". Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284254.

8

Métois, Eric. "Musical sound information: musical gestures and embedding synthesis". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.

9

Vigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.

Abstract:
This thesis presents research on three-dimensional position tracking technologies used to control concatenative sound synthesis and applies the achieved research results to the design of a new immersive interface for musical expression. The underlying concepts and characteristics of position tracking technologies are reviewed and musical applications using these technologies are surveyed to exemplify their use. Four position tracking systems based on different technologies are empirically compared according to their performance parameters, technical specifications, and practical considerations of use. Concatenative sound synthesis, a corpus-based synthesis technique grounded on the segmentation, analysis and concatenation of sound units, is discussed. Three implementations of this technique are compared according to the characteristics of the main components involved in the architecture of these systems. Finally, this thesis introduces SoundCloud, an implementation that extends the interaction possibilities of one of the concatenative synthesis systems reviewed, providing a novel visualisation application. SoundCloud allows a musician to perform with a database of sounds distributed in a three-dimensional descriptor space by exploring a performance space with her hands.
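
The unit-selection step at the core of concatenative synthesis, as described here, amounts to a nearest-neighbor lookup in a descriptor space: a target point (for instance, a tracked hand position mapped into that space) retrieves the closest sound unit in the corpus. The toy three-dimensional corpus below is an illustration, not SoundCloud's implementation.

```python
# Sketch of the unit-selection step in concatenative synthesis: a 3-D target
# point retrieves the nearest sound unit (toy data; not SoundCloud's code).
import numpy as np

rng = np.random.default_rng(1)
corpus = rng.uniform(0, 1, size=(500, 3))   # 500 units, 3 descriptors each
                                            # (e.g., loudness, pitch, brightness)

def select_unit(target):
    """Return the index of the corpus unit closest to the 3-D target point."""
    return int(np.argmin(np.linalg.norm(corpus - target, axis=1)))

hand_position = np.array([0.2, 0.8, 0.5])   # normalized 3-D controller input
print("play unit", select_unit(hand_position))
```
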
10

Maestre Gómez, Esteban. "Modeling instrumental gestures: an analysis/synthesis framework for violin bowing". Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7562.

Abstract:
This work presents a methodology for modeling instrumental gestures in excitation-continuous musical instruments. In particular, it approaches bowing control in violin classical performance. Nearly non-intrusive sensing techniques are introduced and applied for accurately acquiring relevant timbre-related bowing control parameter signals and constructing a performance database. By defining a vocabulary of bowing parameter envelopes, the contours of bow velocity, bow pressing force, and bow-bridge distance are modeled as sequences of Bézier cubic curve segments, yielding a robust parameterization that is well suited for reconstructing the original contours with significant fidelity. An analysis/synthesis statistical modeling framework is constructed from a database of parameterized contours of bowing controls, enabling a flexible mapping between score annotations and bowing parameter envelopes. The framework is used for score-based generation of synthetic bowing parameter contours through a bow planning algorithm able to reproduce constraints imposed by the finite length of the bow. Rendered bowing control signals are successfully applied to automatic performance, driving offline violin sound generation through two of the most widespread techniques: digital waveguide physical modeling and sample-based synthesis.
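
The contour parameterization described in this abstract can be illustrated by evaluating one cubic Bézier segment: four control points reproduce one stretch of, say, a bow-velocity envelope, and a full contour is a sequence of such segments. The control-point values below are invented.

```python
# Sketch: one cubic Bezier segment of a bowing-parameter contour (e.g., bow
# velocity over a note). Control points are invented; the thesis fits such
# segments to measured contours.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a cubic Bezier curve at n points, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]      # column vector for broadcasting
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# (time in s, bow velocity in cm/s) control points for one envelope segment
p0, p1, p2, p3 = map(np.array, [(0.0, 0.0), (0.1, 35.0), (0.3, 40.0), (0.5, 10.0)])
segment = cubic_bezier(p0, p1, p2, p3)
print(segment[:3])   # first samples of the reconstructed (time, velocity) contour
```
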

Books on the topic "Gesture Synthesis"

1

Bernstein, Zachary. Thinking In and About Music. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190949235.001.0001.

Abstract:
Milton Babbitt (1916–2011) was, at once, one of the century’s foremost composers and a founder of American music theory. These two aspects of his creative life—“thinking in” and “thinking about” music, as he would put it—nourished each other. Theory and analysis inspired fresh compositional ideas, and compositional concerns focused theoretical and analytical inquiry. Accordingly, this book undertakes an excavation of the sources of his theorizing as a guide to analysis of his music. Babbitt’s idiosyncratic synthesis of ideas from Heinrich Schenker, analytic philosophy, and cognitive science—at least as much as more obviously relevant, and more frequently cited, predecessors such as Arnold Schoenberg—provides insight into his aesthetics and compositional technique. Examination of Babbitt’s newly available sketch materials sheds additional light on his procedures. But a close look at his music reveals a host of concerns unaccounted for in his theories, some of which seem to directly contradict theoretical expectations. New analytical models are needed to complement those suggested by Babbitt’s theories. Departing from the serial logic of Babbitt’s writings, his compositional procedures, and most previous work on the subject—and in an attempt to discuss Babbitt’s music as it is actually heard rather than just deciphered—the book brings to bear theories of gesture and embodiment, rhetoric, text setting, and temporality. The result is a richly multifaceted look at one of the twentieth century’s most fascinating musical minds.
2

Bennett, Christopher. Grace, Freedom, and the Expression of Emotion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198766858.003.0010.

Abstract:
Schiller’s discussion of the expression of emotion takes place in the context of his arguments for the importance of grace. The expressions of emotion that help to constitute grace, on Schiller’s view, are neither physiological changes that accompany emotion, nor expressions in art, but rather gestures. Schiller notices that actions like this pose a problem for what he takes to be an attractive, Kantian conception of freedom. Schiller accepts that action out of emotion cannot be explained simply mechanistically, and accepts the Kantian conception of freedom as spontaneity; but he breaks new ground in asking how that view is to be reconciled with the fact that sometimes we act expressively, out of emotion. His answer has to do with the synthesis of the different sides of human nature in grace. The ideal of grace is an ideal where we act expressively yet freely.
3

Silver, Morris. Economic Structures of Antiquity. Greenwood Publishing Group, Inc., 1995. http://dx.doi.org/10.5040/9798400643606.

Abstract:
The economy of the ancient Middle East and Greece is reinterpreted by Morris Silver in this provocative new synthesis. Silver finds that the ancient economy emerges as a class of economies with its own laws of motion shaped by transaction costs (the resources used up in exchanging ownership rights). The analysis of transaction costs provides insights into many characteristics of the ancient economy, such as the important role of the sacred and symbolic gestures in making contracts, magical technology, the entrepreneurial role of high-born women, the elevation of familial ties and other departures from impersonal economics, reliance on slavery and adoption, and the insatiable drive to accumulate trust-capital. The peculiar behavior patterns and mindsets of ancient economic man are shown to be facilitators of economic growth. In recent years, our view of the economy of the ancient world has been shaped by the theories of Karl Polanyi. Silver confronts Polanyi's empirical propositions with the available evidence and demonstrates that antiquity knew active and sophisticated markets. In the course of providing an alternative analytical framework for studying the ancient economy, Silver gives critical attention to the economic views of the Assyriologists I.M. Diakonoff, W.F. Leemans, Mario Liverani, and J.N. Postgate; of the Egyptologists Jacob J. Janssen and Wolfgang Helck; and of the numerous followers of Moses Finley. Silver convincingly demonstrates that the ancient world was not static: periods of pervasive economic regulation by the state are interspersed with lengthy periods of relatively unfettered market activity, and the economies of Sumer, Babylonia, and archaic Greece were capable of transforming themselves in order to take advantage of new opportunities. This new synthesis is essential reading for economic historians and researchers of the ancient Near East and Greece.

Book chapters on the topic "Gesture Synthesis"

1

Losson, Olivier, and Jean-Marc Vannobel. "Sign Specification and Synthesis". In Gesture-Based Communication in Human-Computer Interaction, 239–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_21.

2

Neff, Michael. "Hand Gesture Synthesis for Conversational Characters". In Handbook of Human Motion, 2201–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_5.

3

Neff, Michael. "Hand Gesture Synthesis for Conversational Characters". In Handbook of Human Motion, 1–12. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_5-1.

4

Olivier, Patrick. "Gesture Synthesis in a Real-World ECA". In Lecture Notes in Computer Science, 319–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24842-2_35.

5

Wachsmuth, Ipke, and Stefan Kopp. "Lifelike Gesture Synthesis and Timing for Conversational Agents". In Gesture and Sign Language in Human-Computer Interaction, 120–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_13.

6

Hartmann, Björn, Maurizio Mancini, and Catherine Pelachaud. "Implementing Expressive Gesture Synthesis for Embodied Conversational Agents". In Lecture Notes in Computer Science, 188–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_22.

7

Julliard, Frédéric, and Sylvie Gibet. "Reactiva’Motion Project: Motion Synthesis Based on a Reactive Representation". In Gesture-Based Communication in Human-Computer Interaction, 265–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_23.

8

Arfib, Daniel, and Loïc Kessous. "Gestural Control of Sound Synthesis and Processing Algorithms". In Gesture and Sign Language in Human-Computer Interaction, 285–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_30.

9

Zhang, Fan, Naye Ji, Fuxing Gao, and Yongping Li. "DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model". In MultiMedia Modeling, 231–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_18.

10

Crombie Smith, Kirsty, and William Edmondson. "The Development of a Computational Notation for Synthesis of Sign and Gesture". In Gesture-Based Communication in Human-Computer Interaction, 312–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24598-8_29.


Conference papers on the topic "Gesture Synthesis"

1

Bargmann, Robert, Volker Blanz e Hans-Peter Seidel. "A nonlinear viseme model for triphone-based speech synthesis". In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813362.

2

Sargin, M. E., O. Aran, A. Karpov, F. Ofli, Y. Yasinnik, S. Wilson, E. Erzin, Y. Yemez, and A. M. Tekalp. "Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis". In 2006 IEEE International Conference on Multimedia and Expo. IEEE, 2006. http://dx.doi.org/10.1109/icme.2006.262663.

3

Lu, Shuhong, Youngwoo Yoon e Andrew Feng. "Co-Speech Gesture Synthesis using Discrete Gesture Token Learning". In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10342027.

4

Wang, Siyang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, and Éva Székely. "Integrated Speech and Gesture Synthesis". In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479914.

5

Breidt, Martin, Heinrich H. Bülthoff, and Cristobal Curio. "Robust semantic analysis by synthesis of 3D facial motion". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771336.

6

Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771384.

7

Gunes, Hatice, Bjorn Schuller, Maja Pantic, and Roddy Cowie. "Emotion representation, analysis and synthesis in continuous space: A survey". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771357.

8

Liu, Kang, and Joern Ostermann. "Realistic head motion synthesis for an image-based talking head". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771401.

9

Han, Huijian, Rongjun Song, and Yanqiang Fu. "One Algorithm of Gesture Animation Synthesis". In 2016 12th International Conference on Computational Intelligence and Security (CIS). IEEE, 2016. http://dx.doi.org/10.1109/cis.2016.0091.

10

Lee, Chan-Su, and Dimitris Samaras. "Analysis and synthesis of facial expressions using decomposable nonlinear generative models". In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771360.
