Academic literature on the topic 'Agents multimodaux'
Create accurate references in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Agents multimodaux.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Agents multimodaux":
Lisetti, C. "Le paradigme MAUI pour des agents multimodaux d'interface homme-machine socialement intelligents." Revue d'intelligence artificielle 20, no. 4-5 (October 1, 2006): 583–606. http://dx.doi.org/10.3166/ria.20.583-606.
PELACHAUD, CATHERINE, and ISABELLA POGGI. "Multimodal embodied agents." Knowledge Engineering Review 17, no. 2 (June 2002): 181–96. http://dx.doi.org/10.1017/s0269888902000218.
Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao, and Tagyoung Chung. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.
Frullano, Luca, and Thomas J. Meade. "Multimodal MRI contrast agents." JBIC Journal of Biological Inorganic Chemistry 12, no. 7 (July 21, 2007): 939–49. http://dx.doi.org/10.1007/s00775-007-0265-3.
Relyea, Robert, Darshan Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl, and Raymond Ptucha. "Multimodal Localization for Autonomous Agents." Electronic Imaging 2019, no. 7 (January 13, 2019): 451–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.7.iriacv-451.
NISHIMURA, YOSHITAKA, KAZUTAKA KUSHIDA, HIROSHI DOHI, MITSURU ISHIZUKA, JOHANE TAKEUCHI, MIKIO NAKANO, and HIROSHI TSUJINO. "DEVELOPMENT OF MULTIMODAL PRESENTATION MARKUP LANGUAGE MPML-HR FOR HUMANOID ROBOTS AND ITS PSYCHOLOGICAL EVALUATION." International Journal of Humanoid Robotics 04, no. 01 (March 2007): 1–20. http://dx.doi.org/10.1142/s0219843607000947.
Zhang, Zongren, Kexian Liang, Sharon Bloch, Mikhail Berezin, and Samuel Achilefu. "Monomolecular Multimodal Fluorescence-Radioisotope Imaging Agents." Bioconjugate Chemistry 16, no. 5 (September 2005): 1232–39. http://dx.doi.org/10.1021/bc050136s.
Taroni, Andrea. "Multimodal contrast agents combat cardiovascular disease." Materials Today 11, no. 11 (November 2008): 13. http://dx.doi.org/10.1016/s1369-7021(08)70232-3.
Kopp, Stefan, and Ipke Wachsmuth. "Synthesizing multimodal utterances for conversational agents." Computer Animation and Virtual Worlds 15, no. 1 (March 2004): 39–52. http://dx.doi.org/10.1002/cav.6.
Ebling, Ângelo Augusto, and Sylvio Péllico Netto. "MODELAGEM DE OCORRÊNCIA DE COORTES NA ESTRUTURA DIAMÉTRICA DA Araucaria angustifolia (Bertol.) Kuntze." CERNE 21, no. 2 (June 2015): 251–57. http://dx.doi.org/10.1590/01047760201521111667.
Dissertations / Theses on the topic "Agents multimodaux":
Chaker, Walid. "Modélisation multi-échelle d'environnements urbains peuplés : application aux simulations multi-agents des déplacements multimodaux." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26481/26481.pdf.
Abrilian, Sarkis. "Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.
Gallouedec, Quentin. "Toward the generalization of reinforcement learning." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0013.
Conventional Reinforcement Learning (RL) involves training a unimodal agent on a single, well-defined task, guided by a gradient-optimized reward signal. This framework does not allow us to envisage a learning agent suited to real-world problems, which involve diverse modality streams and multiple tasks that are often poorly defined, or sometimes not defined at all. Hence, we advocate for transitioning towards a more general framework, aiming to create RL algorithms that are more inherently versatile. To advance in this direction, we identify two primary areas of focus. The first involves improving exploration, enabling the agent to learn from the environment with reduced dependence on the reward signal. We present Latent Go-Explore (LGE), an extension of the Go-Explore algorithm. While Go-Explore achieved impressive results, it was constrained by domain-specific knowledge. LGE overcomes these limitations, offering wider applicability within a general framework. In various tested environments, LGE consistently outperforms the baselines, showcasing its enhanced effectiveness and versatility. The second focus is to design a general-purpose agent that can operate in a variety of environments, thus involving a multimodal structure and even transcending the conventional sequential framework of RL. We introduce Jack of All Trades (JAT), a multimodal Transformer-based architecture uniquely tailored to sequential decision tasks. Using a single set of weights, JAT demonstrates robustness and versatility, matching its single baseline on several RL benchmarks and even showing promising performance on vision and textual tasks. We believe that these two contributions are a valuable step towards a more general approach to RL. In addition, we present other methodological and technical advances that are closely related to our core research question.
The first is the introduction of a set of sparsely rewarded simulated robotic environments designed to provide the community with the necessary tools for learning under conditions of low supervision. Notably, three years after its introduction, this contribution has been widely adopted by the community and continues to receive active maintenance and support. The second is Open RL Benchmark, our pioneering initiative to provide a comprehensive and fully tracked set of RL experiments, going beyond typical data to include all algorithm-specific and system metrics. This benchmark aims to improve research efficiency by providing out-of-the-box RL data and facilitating accurate reproducibility of experiments. With its community-driven approach, it has quickly become an important resource, documenting over 25,000 runs. These technical and methodological advances, along with the scientific contributions described above, are intended to promote a more general approach to Reinforcement Learning and, we hope, represent a meaningful step toward the eventual development of a more broadly capable RL agent.
Kamoun, Mohamed. "Conception d’un système d’information pour l’aide au déplacement multimodal : une approche multi-agents pour la recherche et la composition des itinéraires en ligne." Ecole Centrale de Lille, 2007. http://tel.archives-ouvertes.fr/docs/00/14/28/46/PDF/these_kamoun.pdf.
To plan a journey, a traveller has to consult the websites of several different public transport operators. To spare travellers this time-consuming task, this work designs a Cooperative Mobility Information System (SICM) that provides multimodal, multi-operator travel information. This integration system automates itinerary search and the composition of multi-operator routes, and its design is based on multi-agent system (MAS) theory. The SICM aims to make the operators' existing information systems cooperate efficiently, so that it can provide users with an optimized route to follow by compiling the needed information from the different operators' information sources. In this approach, the SICM is a middleware that becomes one customer among the other users of the existing information systems. It can be considered a mediator between the various distributed information sources on the one hand and the travellers on the other. The system should be able both to find the information sources that can answer an itinerary request and to gather this information in a coherent way to compose an optimized itinerary. To provide a route optimized according to the user's criterion, distributed and time-dependent shortest-path algorithms were adopted and adapted to perform on-line itinerary composition.
Bangalore, Kantharaju Reshmashree. "Modelling Cohesive Behaviours for Virtual Agents in Multiparty Interactions." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS230.
Group interactions are a common form of communication among humans. The members of a group are often involved in discussing, making decisions and exchanging ideas in different settings (e.g., a meeting, conference or party). Group cohesion describes the shared bond that drives the members to stay together and to want to work together to achieve group goals. In group interactions, humans communicate and coordinate with each other via a number of verbal and nonverbal behaviours. In this research work, as a first step we recognise the relation between group cohesion and certain non-verbal social signals of interest. Next, we present results on the automatic estimation of cohesion levels in groups using different features and feature representation techniques. Virtual agents, computer-generated animated characters with human-like non-verbal behaviours, have been widely used for human-computer interaction in various applications, e.g., educational agents, health coaches and training assistants. Most applications so far have focused on developing agents for dyadic interactions, i.e., a single agent and user. A group of agents (multiparty) can be potentially effective in persuading, motivating and educating users through interactive discussions. In the next step, we develop a multiparty model involving multiple autonomous agents capable of displaying cohesive group behaviour, i.e., shared commitment to group tasks and positive relationships among the agents. Considering the surge in the range of applications using virtual agents, it is important to study the interactions between multiple agents and the user and to understand the effects of using such a system. We hypothesise that the use of a multi-agent system allows the user to be more engaged in the discussion, provides different perspectives on the same issue and facilitates informed decision-making.
Therefore, in the final step we conduct multiple user evaluation studies to understand the effects of multiparty interactions on users and their perceptions, e.g., the level of trust and persuasion. We present insights into the most effective form of interaction for promoting behaviour change or persuading the user across different group conversational topics. To summarise, in this thesis we recognise the association between certain non-verbal social signals and group cohesion, present the estimation accuracy using features extracted from these signals, develop a multiparty model to simulate a cohesive group of agents displaying prominent social signals, and finally evaluate the effectiveness of such a model in the context of behaviour change and its effects on users' perceptions.
Fragoso, Ygara Lúcia Souza Melo. "Guibuilder multimodal : um framework para a geração de interfaces multimodais com o apoio de interaction design patterns." Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/7641.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Human-computer interaction has improved substantially over time through the evolution of interfaces. Allowing users to interact with machines through several communication modalities, in a natural way, can increase user interest and help ensure the success of an application. However, the multimodality literature shows that developing such interfaces is not a simple task, especially for inexperienced or recently graduated professionals, since each interaction modality has its own technical complexity for the designer, such as acquiring and adapting to new tools, languages and possible actions. Moreover, it is necessary to verify which modalities (voice, touch and gestures) can be used in the application, how to combine these modalities so that the strengths of one complement the weaknesses of another and vice versa, and in what context the final user will be involved. GuiBuilder Multimodal was developed to meet the basic needs of implementing an interface that uses voice, touch and gesture. The framework supports interface development through the WYSIWYG (What You See Is What You Get) model, where the designer simply sets a few parameters to make a component multimodal. During the interface creation phase, agents supervise what the designer does and provide support and hints, based on design patterns that can be divided into categories such as multimodality, user interaction and components.
Bendal, Ove-Andre. "Integration of multimodal input by using agents." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9251.
Today, user interfaces normally consist of a screen, with a pointing device and a keyboard for input. However, as more advanced technologies and methods appear, there are good opportunities to use them for more natural and effective human-computer interfaces. The main motivation is a more natural and easy-to-use interface: the computer should understand the user without too much effort on the user's part. Intelligent interfaces could be a way to achieve this goal. The main focus of this thesis is multimodal input, which combines different input modalities to achieve the user's goal. A framework has been designed in which the user can switch between input modalities; the system integrates the information given in different input modalities into one joint meaning. In this architecture, input can be either location or command input, and different modalities can be used for each input type. The example described later in this thesis combines either speech or written text as command input with either map input or physical position as location input. An agent-based blackboard architecture is used for collecting input. Agents collect information directly from the user: each agent represents its own input modality and is responsible for analysing its input. Once this is done, the agent sends the information to a common blackboard, which holds the latest information from each agent. A dedicated fusion agent collects the information from the blackboard and integrates it into one joint interpretation, which decides what should be done to which object. Since the modalities are independent of each other, further modalities can easily be added with only small changes to other parts of the system, as long as they provide command or location input conforming to the current representation structure.
Mancini, Maurizio. "Multimodal distinctive behavior for expressive embodied conversational agents." Paris 8, 2008. https://octaviana.fr/items/show/9956#?c=0&m=0&s=0&cv=0.
Embodied Conversational Agents are a new type of computer interface with human-like bodies and conversational skills. Users interacting with agents are more engaged and participative if the agents exhibit behaviors that look coherent across different situations and emotional states. In the present work, we aim at increasing agents' believability by addressing two main aspects of the problem: an agent must be able to show its emotional state and communicative intentions not only through specific facial expressions, gestures, etc., but also by varying the quality of its movements (e.g., their speed and amplitude) and its choice of the modalities used to communicate; and the agent must maintain a distinctive behavior tendency that remains apparent during any communication. We have developed a model addressing these two issues and have evaluated the realism and believability of the resulting agent's behaviors through perceptual tests and an application scenario. The resulting system is highly extensible and configurable, and can also be used as a research tool to study human communication.
Gaciarz, Matthis. "Régulation de trafic urbain multimodal : une modélisation multi-agents." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1281/document.
For several decades, urban congestion has become more and more widespread, deteriorating the quality of life of city dwellers. Several methods are used to reduce urban congestion, notably traffic regulation and the promotion of public transportation. Since the 1990s, tools from artificial intelligence, particularly distributed systems and multi-agent systems, have made it possible to design new methods for traffic regulation: distribution makes it easier to take into account the complexity of traffic-related problems. Moreover, the improvement of vehicles' communication abilities and the advent of autonomous vehicles allow new approaches to regulation to be considered. The research presented in this work is twofold. First, we propose a method for traffic regulation at an intersection based on automatic negotiation. Our method is based on an argumentation system describing the state of the traffic and the preferences of each vehicle, relying on reasoning methods for vehicles and infrastructures. In the second part of this thesis, we propose a method for coordinating buses with the rest of the traffic. This method allows a bus to coordinate in an anticipatory way with the next intersections on its trajectory, in order to define a common regulation policy that lets the bus reach its next stop without suffering from potential congestion.
Kothapalli, Satya V. V. N. "Nano-Engineered Contrast Agents : Toward Multimodal Imaging and Acoustophoresis." Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172397.
Books on the topic "Agents multimodaux":
Miehle, Juliana, Wolfgang Minker, Elisabeth André, and Koichiro Yoshino, eds. Multimodal Agents for Ageing and Multicultural Societies. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3476-5.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe, eds. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15557-9.
Emerson, Donald J., Doris Lee, Crystal M. Cummings, Jennifer Thompson, Bridget M. Wieghart, and Shelly Brown. Navigating Multi-Agency NEPA Processes to Advance Multimodal Transportation Projects. Washington, D.C.: Transportation Research Board, 2016. http://dx.doi.org/10.17226/23581.
Müller-Jentsch, Daniel. Transport policies for the Euro-Mediterranean free-trade area: An agenda for multimodal transport reform in the southern Mediterranean. Washington, D.C: World Bank, 2002.
Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology). Springer, 2006.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Springer, 2015.
Mehta, M. P., L. W. Brady, J. M. Brown, C. Nieder, and H. P. Heilmann. Multimodal Concepts for Integration of Cytotoxic Drugs. Springer Berlin / Heidelberg, 2010.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer, 2022.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer Singapore Pte. Limited, 2021.
Brown, J. M., M. P. Mehta, and C. Nieder, eds. Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology / Radiation Oncology). Foreword by L. W. Brady, H. P. Heilmann, and M. Molls. Springer, 2006.
Book chapters on the topic "Agents multimodaux":
Kipp, Michael, Alexis Heloir, Marc Schröder, and Patrick Gebhard. "Realizing Multimodal Behavior." In Intelligent Virtual Agents, 57–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_7.
Rehm, Matthias. "Multimodal Training Between Agents." In Intelligent Virtual Agents, 348–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39396-2_57.
Niewiadomski, Radosław, and Catherine Pelachaud. "Towards Multimodal Expression of Laughter." In Intelligent Virtual Agents, 231–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33197-8_24.
Ding, Yu, Catherine Pelachaud, and Thierry Artières. "Modeling Multimodal Behaviors from Speech Prosody." In Intelligent Virtual Agents, 217–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40415-3_19.
Bevacqua, Elisabetta, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder, and Catherine Pelachaud. "Multimodal Backchannels for Embodied Conversational Agents." In Intelligent Virtual Agents, 194–200. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_21.
Reidsma, Dennis, Herwin van Welbergen, and Job Zwiers. "Multimodal Plan Representation for Adaptable BML Scheduling." In Intelligent Virtual Agents, 296–308. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_32.
Riviere, Jeremy, Carole Adam, Sylvie Pesty, Catherine Pelachaud, Nadine Guiraud, Dominique Longin, and Emiliano Lorini. "Expressive Multimodal Conversational Acts for SAIBA Agents." In Intelligent Virtual Agents, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_34.
Morency, Louis-Philippe, Iwan de Kok, and Jonathan Gratch. "Predicting Listener Backchannels: A Probabilistic Multimodal Approach." In Intelligent Virtual Agents, 176–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85483-8_18.
Thórisson, Kristinn R., Olafur Gislason, Gudny Ragna Jonsdottir, and Hrafn Th Thorisson. "A Multiparty Multimodal Architecture for Realtime Turntaking." In Intelligent Virtual Agents, 350–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_37.
Sharma, Parvesh, Amit Singh, Scott C. Brown, Niclas Bengtsson, Glenn A. Walter, Stephen R. Grobmyer, Nobutaka Iwakuma, Swadeshmukul Santra, Edward W. Scott, and Brij M. Moudgil. "Multimodal Nanoparticulate Bioimaging Contrast Agents." In Methods in Molecular Biology, 67–81. Totowa, NJ: Humana Press, 2010. http://dx.doi.org/10.1007/978-1-60761-609-2_5.
Conference papers on the topic "Agents multimodaux":
Dermouche, Soumia, and Catherine Pelachaud. "Generative Model of Agent’s Behaviors in Human-Agent Interaction." In ICMI '19: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3340555.3353758.
Biancardi, Beatrice, Angelo Cafaro, and Catherine Pelachaud. "Could a virtual agent be warm and competent? investigating user's impressions of agent's non-verbal behaviours." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139498.
Teraphongphom, Nutte Tarn, Margaret A. Wheatley, Peter Chhour, and David P. Cormode. "Multimodal Polymeric Contrast Agents." In 2013 39th Annual Northeast Bioengineering Conference (NEBEC). IEEE, 2013. http://dx.doi.org/10.1109/nebec.2013.114.
Chaminade, Thierry. "How do artificial agents think?" In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139511.
Yalçin, Özge Nilay. "Modeling Empathy in Embodied Conversational Agents." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264977.
Tanaka, Hiroki, Hideki Negoro, Hidemi Iwasaka, and Satoshi Nakamura. "Listening Skills Assessment through Computer Agents." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3242970.
Pelachaud, Catherine. "Multimodal expressive embodied conversational agents." In the 13th annual ACM international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1101149.1101301.
Pelachaud, Catherine, and Isabella Poggi. "Multimodal communication between synthetic agents." In the working conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/948496.948518.
"ADAPTATIVE MULTIMODAL ARCHITECTURES MANAGING SOFTWARE QUALITIES." In 1st International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2009. http://dx.doi.org/10.5220/0001665303490352.
Barange, Mukesh, Sandratra Rasendrasoa, Maël Bouabdelli, Julien Saunier, and Alexandre Pauchet. "Multimodal adaptive empathic agent architecture." In IVA '22: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514197.3551251.
Reports on the topic "Agents multimodaux":
Sofge, Donald, Magdalena Bugajska, William Adams, Dennis Perzanowski, and Alan Schultz. Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada434975.
David, Allan E. Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610926.
Lumpkin, Shamsie, Isaac Parrish, Austin Terrell, and Dwayne Accardo. Pain Control: Opioid vs. Nonopioid Analgesia During the Immediate Postoperative Period. University of Tennessee Health Science Center, July 2021. http://dx.doi.org/10.21007/con.dnp.2021.0008.