Academic literature on the topic "Agents multimodaux"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles
Contents
Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Agents multimodaux."
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever this information is included in the metadata.
Journal articles on the topic "Agents multimodaux"
Lisetti, C. « Le paradigme MAUI pour des agents multimodaux d'interface homme-machine socialement intelligents ». Revue d'intelligence artificielle 20, no 4-5 (1 octobre 2006) : 583–606. http://dx.doi.org/10.3166/ria.20.583-606.
PELACHAUD, CATHERINE, et ISABELLA POGGI. « Multimodal embodied agents ». Knowledge Engineering Review 17, no 2 (juin 2002) : 181–96. http://dx.doi.org/10.1017/s0269888902000218.
Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao et Tagyoung Chung. « Building Goal-Oriented Dialogue Systems with Situated Visual Context ». Proceedings of the AAAI Conference on Artificial Intelligence 36, no 11 (28 juin 2022) : 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.
Frullano, Luca, et Thomas J. Meade. « Multimodal MRI contrast agents ». JBIC Journal of Biological Inorganic Chemistry 12, no 7 (21 juillet 2007) : 939–49. http://dx.doi.org/10.1007/s00775-007-0265-3.
Relyea, Robert, Darshan Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl et Raymond Ptucha. « Multimodal Localization for Autonomous Agents ». Electronic Imaging 2019, no 7 (13 janvier 2019) : 451–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.7.iriacv-451.
NISHIMURA, YOSHITAKA, KAZUTAKA KUSHIDA, HIROSHI DOHI, MITSURU ISHIZUKA, JOHANE TAKEUCHI, MIKIO NAKANO et HIROSHI TSUJINO. « DEVELOPMENT OF MULTIMODAL PRESENTATION MARKUP LANGUAGE MPML-HR FOR HUMANOID ROBOTS AND ITS PSYCHOLOGICAL EVALUATION ». International Journal of Humanoid Robotics 04, no 01 (mars 2007) : 1–20. http://dx.doi.org/10.1142/s0219843607000947.
Zhang, Zongren, Kexian Liang, Sharon Bloch, Mikhail Berezin et Samuel Achilefu. « Monomolecular Multimodal Fluorescence-Radioisotope Imaging Agents ». Bioconjugate Chemistry 16, no 5 (septembre 2005) : 1232–39. http://dx.doi.org/10.1021/bc050136s.
Taroni, Andrea. « Multimodal contrast agents combat cardiovascular disease ». Materials Today 11, no 11 (novembre 2008) : 13. http://dx.doi.org/10.1016/s1369-7021(08)70232-3.
Kopp, Stefan, et Ipke Wachsmuth. « Synthesizing multimodal utterances for conversational agents ». Computer Animation and Virtual Worlds 15, no 1 (mars 2004) : 39–52. http://dx.doi.org/10.1002/cav.6.
Ebling, Ângelo Augusto, et Sylvio Péllico Netto. « MODELAGEM DE OCORRÊNCIA DE COORTES NA ESTRUTURA DIAMÉTRICA DA Araucaria angustifolia (Bertol.) Kuntze ». CERNE 21, no 2 (juin 2015) : 251–57. http://dx.doi.org/10.1590/01047760201521111667.
Theses on the topic "Agents multimodaux"
Chaker, Walid. « Modélisation multi-échelle d'environnements urbains peuplés : application aux simulations multi-agents des déplacements multimodaux ». Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26481/26481.pdf.
Abrilian, Sarkis. « Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse ». Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.
Gallouedec, Quentin. « Toward the generalization of reinforcement learning ». Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0013.
Texte intégralConventional Reinforcement Learning (RL) involves training a unimodal agent on a single, well-defined task, guided by a gradient-optimized reward signal. This framework does not allow us to envisage a learning agent adapted to real-world problems involving diverse modality streams, multiple tasks, often poorly defined, sometimes not defined at all. Hence, we advocate for transitioning towards a more general framework, aiming to create RL algorithms that more inherently versatile.To advance in this direction, we identify two primary areas of focus. The first aspect involves improving exploration, enabling the agent to learn from the environment with reduced dependence on the reward signal. We present Latent Go-Explore (LGE), an extension of the Go-Explore algorithm. While Go-Explore achieved impressive results, it was constrained by domain-specific knowledge. LGE overcomes these limitations, offering wider applicability within a general framework. In various tested environments, LGE consistently outperforms the baselines, showcasing its enhanced effectiveness and versatility. The second focus is to design a general-purpose agent that can operate in a variety of environments, thus involving a multimodal structure and even transcending the conventional sequential framework of RL. We introduce Jack of All Trades (JAT), a multimodal Transformer-based architecture uniquely tailored to sequential decision tasks. Using a single set of weights, JAT demonstrates robustness and versatility, competing its unique baseline on several RL benchmarks and even showing promising performance on vision and textual tasks. We believe that these two contributions are a valuable step towards a more general approach to RL. In addition, we present other methodological and technical advances that are closely related to our core research question. 
The first is the introduction of a set of sparsely rewarded simulated robotic environments designed to provide the community with the necessary tools for learning under conditions of low supervision. Notably, three years after its introduction, this contribution has been widely adopted by the community and continues to receive active maintenance and support. On the other hand, we present Open RL Benchmark, our pioneering initiative to provide a comprehensive and fully tracked set of RL experiments, going beyond typical data to include all algorithm-specific and system metrics. This benchmark aims to improve research efficiency by providing out-of-the-box RL data and facilitating accurate reproducibility of experiments. With its community-driven approach, it has quickly become an important resource, documenting over 25,000 runs. These technical and methodological advances, along with the scientific contributions described above, are intended to promote a more general approach to Reinforcement Learning and, we hope, represent a meaningful step toward the eventual development of a more operative RL agent.
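The exploration loop that Go-Explore (and, by extension, LGE) builds on can be sketched in a few lines. This is a toy illustration on an invented, deterministic one-dimensional environment, not the thesis implementation; LGE's contribution is precisely to replace the hand-designed cell selection below with sampling in a learned latent space.

```python
import random

def toy_reset():
    return 0  # start at the origin of a 1-D world

def toy_step(state, action):
    return state + action  # deterministic transition, action in {-1, +1}

def go_explore(n_iters=200, horizon=10, seed=0):
    """Toy Go-Explore-style loop: archive every reached 'cell',
    repeatedly return to an archived cell by replaying its action
    sequence, then explore randomly from it. No reward is used."""
    rng = random.Random(seed)
    archive = {0: []}  # cell -> shortest known action sequence reaching it
    for _ in range(n_iters):
        # 1. Select a cell to return to (uniform here; LGE instead biases
        #    selection toward low-density regions of a learned latent space).
        cell = rng.choice(list(archive))
        path = list(archive[cell])
        state = toy_reset()
        for a in path:                 # 2. Return phase (deterministic env)
            state = toy_step(state, a)
        for _ in range(horizon):       # 3. Explore phase
            a = rng.choice([-1, 1])
            state = toy_step(state, a)
            path.append(a)
            if state not in archive or len(path) < len(archive[state]):
                archive[state] = list(path)
    return archive

archive = go_explore()  # the archive spreads far from the start, reward-free
```

Even this crude version covers many states that undirected random walks rarely reach, which is the intuition behind archive-based exploration.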
Kamoun, Mohamed. « Conception d’un système d’information pour l’aide au déplacement multimodal : une approche multi-agents pour la recherche et la composition des itinéraires en ligne ». Ecole Centrale de Lille, 2007. http://tel.archives-ouvertes.fr/docs/00/14/28/46/PDF/these_kamoun.pdf.
To plan a journey, a traveller has to consult the websites of several public transport operators. To spare users this time-consuming task, this work consists in designing a Cooperative Mobility Information System (SICM) providing multimodal, multi-operator travel information. This integration system automates the itinerary search and the composition of multi-operator routes. Its design is based on multi-agent system (MAS) theory. The SICM makes the operators' existing information systems cooperate efficiently, so that it can provide users with an optimized route by compiling the needed information from the different operators' information sources. In this approach, the SICM is a middleware that becomes a customer among the other users of the existing information systems. It can be considered a mediator between the various distributed information sources on the one hand and the travellers on the other. The system should be able, at the same time, to find the information sources that can answer an itinerary request, and to gather this information coherently to compose an optimized itinerary. To provide a route optimized according to the user's criterion, distributed and time-dependent shortest-path algorithms were adopted and adapted to perform on-line itinerary composition.
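The time-dependent shortest-path idea mentioned in this abstract can be illustrated with an earliest-arrival variant of Dijkstra's algorithm; the network, timetables, and operator split below are invented for the example and are not the thesis's actual data model.

```python
import heapq

def earliest_arrival(graph, source, target, t0):
    """Earliest-arrival Dijkstra on a time-dependent network.
    graph[u] is a list of (v, travel) pairs, where travel(t) returns
    the arrival time at v when leaving u at time t (assumed FIFO:
    departing later never lets you arrive earlier)."""
    best = {source: t0}
    queue = [(t0, source)]
    while queue:
        t, u = heapq.heappop(queue)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel in graph.get(u, []):
            arrival = travel(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(queue, (arrival, v))
    return None  # target unreachable

def timetable(departures, ride_time):
    """Board the next scheduled departure at or after time t."""
    return lambda t: min((d for d in departures if d >= t),
                         default=float("inf")) + ride_time

# Two legs run by two different (fictional) operators.
graph = {
    "A": [("B", timetable([0, 10, 20], 5))],
    "B": [("C", timetable([7, 17, 27], 4))],
}
earliest_arrival(graph, "A", "C", t0=1)  # leave A at t=1, arrive C at t=21
```

Because edge costs are functions of the departure time, waiting for the next scheduled trip is handled inside `travel`, which is what distinguishes this from the static shortest-path setting.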
Bangalore, Kantharaju Reshmashree. « Modelling Cohesive Behaviours for Virtual Agents in Multiparty Interactions ». Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS230.
Group interactions are a common form of communication among humans. The members of a group are often involved in discussing, making decisions and exchanging ideas, in different settings (e.g., a meeting, conference or party). Group cohesion describes the shared bond that drives the members to stay together and to want to work together to achieve group goals. In group interactions, humans communicate and coordinate with each other via a number of verbal and non-verbal behaviours. In this research work, as a first step we identify the relation between group cohesion and certain non-verbal social signals of interest. Next, we present results on the automatic estimation of cohesion levels in groups using different features and feature representation techniques. Virtual agents, computer-generated animated characters with human-like non-verbal behaviours, have been widely used for human-computer interaction in various applications, e.g., educational agents, health coaches and training assistants. Most applications so far have focused on developing agents for dyadic interactions, i.e., a single agent and a user. A group of agents (a multiparty setting) can potentially be effective in persuading, motivating and educating users through interactive discussions. In the next step, we develop a multiparty model involving multiple autonomous agents capable of displaying cohesive group behaviour, i.e., shared commitment to group tasks and positive relationships among the agents. Considering the surge in the range of applications using virtual agents, it is important to study the interactions between multiple agents and the user and to understand the effects of using such a system. We hypothesise that a multi-agent system would allow the user to be more engaged in the discussion, provide different perspectives on the same issue, and help users make informed decisions.
Therefore, in the final step we conduct multiple user evaluation studies to understand the effects of multiparty interactions on users and their perceptions, e.g., the level of trust and persuasion. We present insights into the most effective form of interaction for promoting behaviour change or persuading the user across different group conversational topics. To summarise, in this thesis we identify the association between certain non-verbal social signals and group cohesion, present the estimation accuracy obtained using features extracted from these signals, develop a multiparty model to simulate a cohesive group of agents displaying prominent social signals, and finally evaluate the effectiveness of such a model in the context of behaviour change and its effects on users' perceptions.
Fragoso, Ygara Lúcia Souza Melo. « Guibuilder multimodal : um framework para a geração de interfaces multimodais com o apoio de interaction design patterns ». Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/7641.
The interaction between humans and computers has improved substantially over time through the evolution of interaction interfaces. Allowing users to interact with machines through several modalities of communication, in a natural way, can increase the user's level of interest and help ensure the success of the application. However, the literature on multimodality shows that developing such interfaces is not a simple task, especially for inexperienced or recently graduated professionals, since each interaction modality has its own technical complexity, such as acquiring and adapting to new tools, languages and possible actions. Moreover, it is necessary to verify which modalities (voice, touch and gestures) can be used in the application, how to combine these modalities so that the strong point of one complements the weak point of another and vice versa, and in which context the final user will be involved. GuiBuilder Multimodal was developed to meet the basic needs of implementing an interface that uses voice, touch and gesture. The framework supports interface development through the WYSIWYG (What You See Is What You Get) model, where the designer only sets a few parameters to make a component multimodal. During the interface creation phase, agents supervise what the designer does and provide support and hints based on design patterns, which are divided into categories such as multimodality, user interaction and components.
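The pattern-suggestion behaviour this abstract describes can be caricatured as a rule lookup keyed by the designer's last action. The category names, actions, and hint texts below are invented for illustration; they are not GuiBuilder Multimodal's actual catalogue.

```python
# Hypothetical mapping from designer actions to design-pattern hints,
# grouped in the categories mentioned above (multimodality,
# user interaction, components).
PATTERN_HINTS = {
    "multimodality": {
        "added_button": "Offer a voice command equivalent for the button.",
        "added_text_field": "Allow dictation as an alternative to typing.",
    },
    "user_interaction": {
        "added_button": "Provide audible feedback when the button is pressed.",
    },
    "components": {
        "added_text_field": "Validate input the same way for all modalities.",
    },
}

def hints_for(action):
    """Collect every hint that applies to the designer's last action."""
    return {category: rules[action]
            for category, rules in PATTERN_HINTS.items()
            if action in rules}

hints_for("added_button")
# hints from the 'multimodality' and 'user_interaction' categories
```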
Bendal, Ove-Andre. « Integration of multimodal input by using agents ». Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9251.
Today, user interfaces normally consist of a screen, with a pointing device and a keyboard for input. However, as more advanced technologies and methods appear, there are good opportunities to use them for more natural and effective human-computer interfaces. The main motivation is to obtain a more natural, easy-to-use interface, where the computer understands the user without too much effort on the user's part. Intelligent interfaces could be a way to achieve this goal. The main focus of this thesis is multimodal input, which combines different input modalities to achieve the user's goal. A framework has been designed in which the user can switch between input modalities. The system integrates the information given in different input modalities into one joint meaning. In this architecture, input can be either location or command input, and different modalities can be used for each input type. The example described later in this thesis combines either speech or written text as command input with either map input or physical position as location input. An agent-based blackboard architecture is used for collecting input. Agents collect information directly from the user. Each agent represents its own input modality and is responsible for analysing input. Once this is done, the agent sends the information to a common blackboard, which holds the latest information from each agent. A dedicated agent responsible for fusing this information collects it from the blackboard and integrates it into one joint meaning. This joint interpretation decides what should be done to which object. Since the modalities are independent of each other, further modalities can easily be added with only small changes to the rest of the system, as long as the new input is a command or location input that conforms to the current representation structure.
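The blackboard integration described in this abstract can be sketched as follows. The slot names, modalities, and sample values are illustrative assumptions, not the thesis's actual interfaces: each modality agent posts its latest interpretation, and a fusion agent merges the newest command with the newest location into one joint meaning.

```python
class Blackboard:
    """Shared store holding the latest interpretation posted by each
    input-modality agent (speech, typed text, map click, GPS, ...)."""
    def __init__(self):
        self.entries = {}  # modality -> (timestamp, slot, value)

    def post(self, modality, slot, value, timestamp):
        self.entries[modality] = (timestamp, slot, value)

class FusionAgent:
    """Integrates the per-modality entries into one joint meaning:
    the most recent value for each input type (command, location)."""
    def fuse(self, board):
        joint = {}
        for timestamp, slot, value in sorted(board.entries.values()):
            joint[slot] = value  # later posts override earlier ones
        # A complete joint interpretation needs both input types.
        return joint if {"command", "location"} <= joint.keys() else None

board = Blackboard()
board.post("speech", "command", "zoom in", timestamp=1.0)
board.post("map", "location", (59.91, 10.75), timestamp=2.0)
FusionAgent().fuse(board)
# -> {'command': 'zoom in', 'location': (59.91, 10.75)}
```

Because agents only ever talk to the blackboard, adding a new modality means adding one more poster, which mirrors the extensibility claim made above.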
Mancini, Maurizio. « Multimodal distinctive behavior for expressive embodied conversational agents ». Paris 8, 2008. https://octaviana.fr/items/show/9956#?c=0&m=0&s=0&cv=0.
Embodied Conversational Agents are a new type of computer interface with human-like bodies and conversational skills. Users interacting with agents are more engaged and participative if the agents exhibit behaviours that look coherent across different situations and emotional states. In the present work, we aim to increase agents' believability by looking at two main aspects of the problem: an agent must be able to show its emotional state and communicative intentions not only through specific facial expressions, gestures, etc., but also by varying the quality of its movements (e.g., their speed and amplitude) and the choice of the modalities used to communicate; and the agent must maintain a distinctive behaviour tendency that remains apparent during any communication. We have developed a model addressing these two issues and have evaluated the realism and believability of the resulting agent's behaviours through perceptual tests and an application scenario. The resulting system is highly extensible and configurable, and can also be used as a research tool to study human communication.
Gaciarz, Matthis. « Régulation de trafic urbain multimodal : une modélisation multi-agents ». Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1281/document.
For several decades, urban congestion has become increasingly widespread and has deteriorated the quality of life of city dwellers. Several methods are used to reduce urban congestion, notably traffic regulation and the promotion of public transportation. Since the 1990s, tools from artificial intelligence, particularly distributed systems and multi-agent systems, have made it possible to design new methods for traffic regulation. Indeed, these methods make it easier to take into account the complexity of traffic-related problems through distribution. Moreover, the improving communication abilities of vehicles and the arrival of autonomous vehicles allow new regulation approaches to be considered. The research presented in this work is twofold. First, we propose a method for traffic regulation at an intersection based on automated negotiation. Our method is based on an argumentation system describing the state of the traffic and the preferences of each vehicle, relying on reasoning methods for vehicles and infrastructures. In the second part of this thesis, we propose a method for coordinating buses with the rest of the traffic. This method allows a bus to coordinate in an anticipatory way with the next intersections on its trajectory, in order to define a common regulation policy that lets the bus reach its next stop without suffering from potential congestion.
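A minimal caricature of argument-based right-of-way arbitration can make the negotiation idea concrete. The argument types and additive scoring below are invented for illustration; the thesis's argumentation system is considerably richer (structured arguments, attack relations, and infrastructure-side reasoning).

```python
def grant_right_of_way(requests):
    """Toy intersection arbitration: each vehicle submits weighted
    arguments for crossing first; the strongest overall case wins."""
    def strength(request):
        return sum(weight for _reason, weight in request["arguments"])
    return max(requests, key=strength)["vehicle"]

requests = [
    {"vehicle": "bus42",
     "arguments": [("behind schedule", 3), ("carries 30 passengers", 5)]},
    {"vehicle": "car7",
     "arguments": [("waiting for 40 s", 2)]},
]
grant_right_of_way(requests)  # -> 'bus42'
```

Such a scheme naturally favours high-occupancy or delayed vehicles, which is the kind of policy the bus-coordination part of the thesis aims at.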
Kothapalli, Satya V. V. N. « Nano-Engineered Contrast Agents : Toward Multimodal Imaging and Acoustophoresis ». Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172397.
Books on the topic "Agents multimodaux"
Miehle, Juliana, Wolfgang Minker, Elisabeth André et Koichiro Yoshino, dir. Multimodal Agents for Ageing and Multicultural Societies. Singapore : Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3476-5.
Böck, Ronald, Francesca Bonin, Nick Campbell et Ronald Poppe, dir. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Cham : Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15557-9.
Emerson, Donald J., Doris Lee, Crystal M. Cummings, Jennifer Thompson, Bridget M. Wieghart et Shelly Brown. Navigating Multi-Agency NEPA Processes to Advance Multimodal Transportation Projects. Washington, D.C. : Transportation Research Board, 2016. http://dx.doi.org/10.17226/23581.
Müller-Jentsch, Daniel. Transport policies for the Euro-Mediterranean free-trade area : An agenda for multimodal transport reform in the southern Mediterranean. Washington, D.C : World Bank, 2002.
Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology). Springer, 2006.
Böck, Ronald, Francesca Bonin, Nick Campbell et Ronald Poppe. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Springer, 2015.
Mehta, M. P., L. W. Brady, J. M. Brown, C. Nieder et H. P. Heilmann. Multimodal Concepts for Integration of Cytotoxic Drugs. Springer Berlin / Heidelberg, 2010.
André, Elisabeth, Wolfgang Minker, Juliana Miehle et Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies : Communications of NII Shonan Meetings. Springer, 2022.
André, Elisabeth, Wolfgang Minker, Juliana Miehle et Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies : Communications of NII Shonan Meetings. Springer Singapore Pte. Limited, 2021.
Brady, L. W. (Foreword), H. P. Heilmann (Foreword), M. Molls (Foreword), J. M. Brown (Editor), M. P. Mehta (Editor) et C. Nieder (Editor), dir. Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology / Radiation Oncology). Springer, 2006.
Book chapters on the topic "Agents multimodaux"
Kipp, Michael, Alexis Heloir, Marc Schröder et Patrick Gebhard. « Realizing Multimodal Behavior ». Dans Intelligent Virtual Agents, 57–63. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_7.
Rehm, Matthias. « Multimodal Training Between Agents ». Dans Intelligent Virtual Agents, 348–53. Berlin, Heidelberg : Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39396-2_57.
Niewiadomski, Radosław, et Catherine Pelachaud. « Towards Multimodal Expression of Laughter ». Dans Intelligent Virtual Agents, 231–44. Berlin, Heidelberg : Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33197-8_24.
Ding, Yu, Catherine Pelachaud et Thierry Artières. « Modeling Multimodal Behaviors from Speech Prosody ». Dans Intelligent Virtual Agents, 217–28. Berlin, Heidelberg : Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40415-3_19.
Bevacqua, Elisabetta, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder et Catherine Pelachaud. « Multimodal Backchannels for Embodied Conversational Agents ». Dans Intelligent Virtual Agents, 194–200. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_21.
Reidsma, Dennis, Herwin van Welbergen et Job Zwiers. « Multimodal Plan Representation for Adaptable BML Scheduling ». Dans Intelligent Virtual Agents, 296–308. Berlin, Heidelberg : Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_32.
Riviere, Jeremy, Carole Adam, Sylvie Pesty, Catherine Pelachaud, Nadine Guiraud, Dominique Longin et Emiliano Lorini. « Expressive Multimodal Conversational Acts for SAIBA Agents ». Dans Intelligent Virtual Agents, 316–23. Berlin, Heidelberg : Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_34.
Morency, Louis-Philippe, Iwan de Kok et Jonathan Gratch. « Predicting Listener Backchannels : A Probabilistic Multimodal Approach ». Dans Intelligent Virtual Agents, 176–90. Berlin, Heidelberg : Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85483-8_18.
Thórisson, Kristinn R., Olafur Gislason, Gudny Ragna Jonsdottir et Hrafn Th Thorisson. « A Multiparty Multimodal Architecture for Realtime Turntaking ». Dans Intelligent Virtual Agents, 350–56. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_37.
Sharma, Parvesh, Amit Singh, Scott C. Brown, Niclas Bengtsson, Glenn A. Walter, Stephen R. Grobmyer, Nobutaka Iwakuma, Swadeshmukul Santra, Edward W. Scott et Brij M. Moudgil. « Multimodal Nanoparticulate Bioimaging Contrast Agents ». Dans Methods in Molecular Biology, 67–81. Totowa, NJ : Humana Press, 2010. http://dx.doi.org/10.1007/978-1-60761-609-2_5.
Conference papers on the topic "Agents multimodaux"
Dermouche, Soumia, et Catherine Pelachaud. « Generative Model of Agent’s Behaviors in Human-Agent Interaction ». Dans ICMI '19 : INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA : ACM, 2019. http://dx.doi.org/10.1145/3340555.3353758.
Biancardi, Beatrice, Angelo Cafaro et Catherine Pelachaud. « Could a virtual agent be warm and competent ? investigating user's impressions of agent's non-verbal behaviours ». Dans ICMI '17 : INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA : ACM, 2017. http://dx.doi.org/10.1145/3139491.3139498.
Teraphongphom, Nutte Tarn, Margaret A. Wheatley, Peter Chhour et David P. Cormode. « Multimodal Polymeric Contrast Agents ». Dans 2013 39th Annual Northeast Bioengineering Conference (NEBEC). IEEE, 2013. http://dx.doi.org/10.1109/nebec.2013.114.
Chaminade, Thierry. « How do artificial agents think ? » Dans ICMI '17 : INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA : ACM, 2017. http://dx.doi.org/10.1145/3139491.3139511.
Yalçin, Özge Nilay. « Modeling Empathy in Embodied Conversational Agents ». Dans ICMI '18 : INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA : ACM, 2018. http://dx.doi.org/10.1145/3242969.3264977.
Tanaka, Hiroki, Hideki Negoro, Hidemi Iwasaka et Satoshi Nakamura. « Listening Skills Assessment through Computer Agents ». Dans ICMI '18 : INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA : ACM, 2018. http://dx.doi.org/10.1145/3242969.3242970.
Pelachaud, Catherine. « Multimodal expressive embodied conversational agents ». Dans the 13th annual ACM international conference. New York, New York, USA : ACM Press, 2005. http://dx.doi.org/10.1145/1101149.1101301.
Pelachaud, Catherine, et Isabella Poggi. « Multimodal communication between synthetic agents ». Dans the working conference. New York, New York, USA : ACM Press, 1998. http://dx.doi.org/10.1145/948496.948518.
« ADAPTATIVE MULTIMODAL ARCHITECTURES MANAGING SOFTWARE QUALITIES ». Dans 1st International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2009. http://dx.doi.org/10.5220/0001665303490352.
Barange, Mukesh, Sandratra Rasendrasoa, Maël Bouabdelli, Julien Saunier et Alexandre Pauchet. « Multimodal adaptive empathic agent architecture ». Dans IVA '22 : ACM International Conference on Intelligent Virtual Agents. New York, NY, USA : ACM, 2022. http://dx.doi.org/10.1145/3514197.3551251.
Reports by organizations on the topic "Agents multimodaux"
Sofge, Donald, Magdalena Bugajska, William Adams, Dennis Perzanowski et Alan Schultz. Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots. Fort Belvoir, VA : Defense Technical Information Center, janvier 2003. http://dx.doi.org/10.21236/ada434975.
David, Allan E. Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis. Fort Belvoir, VA : Defense Technical Information Center, octobre 2014. http://dx.doi.org/10.21236/ada610926.
Lumpkin, Shamsie, Isaac Parrish, Austin Terrell et Dwayne Accardo. Pain Control : Opioid vs. Nonopioid Analgesia During the Immediate Postoperative Period. University of Tennessee Health Science Center, juillet 2021. http://dx.doi.org/10.21007/con.dnp.2021.0008.