A selection of scholarly literature on the topic "Multimodal agents"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multimodal agents."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Multimodal agents"
Pelachaud, Catherine, and Isabella Poggi. "Multimodal embodied agents." Knowledge Engineering Review 17, no. 2 (June 2002): 181–96. http://dx.doi.org/10.1017/s0269888902000218.
Frullano, Luca, and Thomas J. Meade. "Multimodal MRI contrast agents." JBIC Journal of Biological Inorganic Chemistry 12, no. 7 (July 21, 2007): 939–49. http://dx.doi.org/10.1007/s00775-007-0265-3.
Relyea, Robert, Darshan Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl, and Raymond Ptucha. "Multimodal Localization for Autonomous Agents." Electronic Imaging 2019, no. 7 (January 13, 2019): 451-1. http://dx.doi.org/10.2352/issn.2470-1173.2019.7.iriacv-451.
Zhang, Zongren, Kexian Liang, Sharon Bloch, Mikhail Berezin, and Samuel Achilefu. "Monomolecular Multimodal Fluorescence-Radioisotope Imaging Agents." Bioconjugate Chemistry 16, no. 5 (September 2005): 1232–39. http://dx.doi.org/10.1021/bc050136s.
Taroni, Andrea. "Multimodal contrast agents combat cardiovascular disease." Materials Today 11, no. 11 (November 2008): 13. http://dx.doi.org/10.1016/s1369-7021(08)70232-3.
Kopp, Stefan, and Ipke Wachsmuth. "Synthesizing multimodal utterances for conversational agents." Computer Animation and Virtual Worlds 15, no. 1 (March 2004): 39–52. http://dx.doi.org/10.1002/cav.6.
Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao, and Tagyoung Chung. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.
Perdigon-Lagunes, Pedro, Octavio Estevez, Cristina Zorrilla Cangas, and Raul Herrera-Becerra. "Gd – Gd2O3 multimodal nanoparticles as labeling agents." MRS Advances 3, no. 14 (2018): 761–66. http://dx.doi.org/10.1557/adv.2018.244.
Burke, Benjamin P., Christopher Cawthorne, and Stephen J. Archibald. "Multimodal nanoparticle imaging agents: design and applications." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, no. 2107 (October 16, 2017): 20170261. http://dx.doi.org/10.1098/rsta.2017.0261.
Knez, Damijan, Izidor Sosič, Ana Mitrović, Anja Pišlar, Janko Kos, and Stanislav Gobec. "8-Hydroxyquinoline-based anti-Alzheimer multimodal agents." Monatshefte für Chemie - Chemical Monthly 151, no. 7 (July 2020): 1111–20. http://dx.doi.org/10.1007/s00706-020-02651-0.
Dissertations on the topic "Multimodal agents"
Bendal, Ove-Andre. "Integration of multimodal input by using agents." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9251.
Today, user interfaces normally consist of a screen, with a pointing device and a keyboard for input. However, as more advanced technologies and methods appear, there are good opportunities to use them for more natural and effective human-computer interfaces. The main motivation is a more natural, easy-to-use interface in which the computer understands the user without requiring much effort from the user. Intelligent interfaces could be one way to achieve this goal. The main focus of this thesis is multimodal input, which combines different input modalities to achieve the user's goal. A framework has been designed in which the user can switch between input modalities. The system integrates the information given in the different input modalities into one joint meaning. In this architecture, input is either location input or command input, and different modalities can be used for each input type. The example described later in this thesis combines either speech or written text as command input with either map input or physical position as location input. An agent-based blackboard architecture is used for collecting input. Agents collect information directly from the user. Each agent represents its own input modality and is responsible for analysing its input. Once this is done, the agent sends the information to a common blackboard, which holds the latest information from each agent. A dedicated fusion agent collects the information from the blackboard and integrates it into one joint meaning. This joint interpretation decides what should be done to which object. Since the modalities are independent of each other, further modalities can easily be added with only small changes to the rest of the system, as long as they provide command or location input that conforms to the current representation structure.
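The agent-based blackboard pattern described in this abstract can be sketched in a few lines. This is a minimal toy illustration under our own assumptions, not the thesis's actual code; all class and method names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Blackboard:
    """Shared store holding the latest interpretation from each input type."""
    entries: dict = field(default_factory=dict)

    def post(self, input_type, value):
        # Each agent overwrites the previous value for its input type.
        self.entries[input_type] = value

class SpeechAgent:
    """Analyses a spoken/written utterance and posts a command."""
    def observe(self, board, utterance):
        board.post("command", utterance.strip().lower())

class MapAgent:
    """Analyses a map click and posts a location."""
    def observe(self, board, coordinates):
        board.post("location", coordinates)

class FusionAgent:
    """Integrates the latest command and location into one joint meaning."""
    def fuse(self, board):
        if "command" in board.entries and "location" in board.entries:
            return (board.entries["command"], board.entries["location"])
        return None  # joint meaning not yet complete

board = Blackboard()
SpeechAgent().observe(board, "  Move HERE ")
MapAgent().observe(board, (59.91, 10.75))
print(FusionAgent().fuse(board))  # → ('move here', (59.91, 10.75))
```

Because each modality agent only writes to the blackboard, a new modality (e.g. a gesture agent posting "location") can be added without touching the fusion logic, which is the extensibility property the abstract emphasizes.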
Mancini, Maurizio. "Multimodal distinctive behavior for expressive embodied conversational agents." Paris 8, 2008. https://octaviana.fr/items/show/9956#?c=0&m=0&s=0&cv=0.
Embodied Conversational Agents are a new kind of computer interface with human-like bodies and conversational skills. Users interacting with agents are more engaged and participative if the agents exhibit behavior that looks coherent across different situations and emotional states. In the present work, we aim at increasing agents' believability by addressing two main aspects of the problem: (i) an agent must be able to show its emotional state and communicative intentions not only through specific facial expressions, gestures, etc., but also by varying the quality of its movements (e.g., their speed, amplitude, etc.) and the choice of the modalities used to communicate; (ii) the agent must maintain a distinctive behavior: its behavior tendency has to remain apparent during any communication. We have developed a model that addresses these two issues and have evaluated the realism and believability of the resulting agent's behaviors through perceptual tests and an application scenario. The resulting system is highly extensible and configurable, and can also be used as a research tool to study human communication.
Gaciarz, Matthis. "Régulation de trafic urbain multimodal : une modélisation multi-agents." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1281/document.
For several decades, urban congestion has become more and more widespread and deteriorates the quality of life of city dwellers. Several methods are used to reduce urban congestion, notably traffic regulation and the promotion of public transportation. Since the 1990s, tools from artificial intelligence, particularly distributed systems and multi-agent systems, have made it possible to design new traffic regulation methods: their distributed nature makes it easier to handle the complexity of traffic-related problems. Moreover, improved vehicle communication abilities and the advent of autonomous vehicles allow new regulation approaches to be considered. The research presented in this thesis is twofold. First, we propose a method for traffic regulation at an intersection based on automatic negotiation. Our method is based on an argumentation system describing the state of the traffic and the preferences of each vehicle, relying on reasoning methods for vehicles and infrastructures. In the second part of this thesis, we propose a method for coordinating buses with the rest of the traffic. This method allows a bus to coordinate, in an anticipatory way, with the next intersections on its route in order to define a common regulation policy that lets the bus reach its next stop without suffering from potential congestion.
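The kind of intersection-level coordination with bus priority that this abstract describes can be illustrated with a toy scheduler. This is not the thesis's argumentation-based negotiation, just a simple priority-scheduling sketch under invented names, where buses win ties on arrival time:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    arrival: float      # estimated arrival time at the intersection (seconds)
    is_bus: bool = False

def schedule_crossings(vehicles, headway=2.0):
    """Assign each vehicle a crossing time, giving buses priority on ties."""
    # Sort by arrival time; on equal arrival, buses come first
    # (False sorts before True, so `not v.is_bus` ranks buses ahead).
    ordered = sorted(vehicles, key=lambda v: (v.arrival, not v.is_bus))
    slots, t = {}, 0.0
    for v in ordered:
        t = max(t, v.arrival)   # a vehicle cannot cross before it arrives
        slots[v.name] = t
        t += headway            # safety gap before the next crossing
    return slots

fleet = [Vehicle("car1", 3.0), Vehicle("bus7", 3.0, is_bus=True), Vehicle("car2", 0.0)]
print(schedule_crossings(fleet))  # → {'car2': 0.0, 'bus7': 3.0, 'car1': 5.0}
```

In the thesis, the crossing order would instead emerge from negotiation over arguments about traffic state and vehicle preferences; the sketch only shows the scheduling outcome such a negotiation produces.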
Kothapalli, Satya V. V. N. "Nano-Engineered Contrast Agents: Toward Multimodal Imaging and Acoustophoresis." Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172397.
Sathiyajith, Cuhananthan Wijayanayagam. "Investigations towards new multidentate ligands as potential multimodal imaging agents." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55044/.
Fragueiro, Oihane. "Developing nanoparticles as contrast agents for cell labelling and multimodal bioimaging." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028423/.
Stocky, Thomas A. (Thomas August), 1978. "Conveying routes: multimodal generation and spatial intelligence in embodied conversational agents." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87833.
Kilic, Nüzhet Inci. "Graphene Quantum Dots as Fluorescent and Passivation Agents for Multimodal Bioimaging." Thesis, KTH, Tillämpad fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298302.
Since their discovery, zero-dimensional graphene (carbon) quantum dots have attracted attention in bio-related applications, particularly for their optical properties, chemical stability, and easily modifiable surface. This thesis focuses on a green synthesis method for nitrogen-doped graphene quantum dots for bimodal bioimaging with X-ray fluorescence and optical fluorescence. Both conventional and microwave-assisted solvothermal synthesis methods were used to investigate the effect of the method on the synthesized quantum dots. The microwave-assisted method enabled the synthesis of uniform quantum dots with excitation-independent properties thanks to highly controllable reaction conditions. It was demonstrated that the molecular structure of the precursors affected the optical fluorescence properties of the graphene quantum dots. By choosing specific precursors, quantum dots emitting in both blue and red light were obtained, with emission maxima at 438 and 605 nm under excitation at 390 and 585 nm, respectively. Amine-functionalized Rh nanoparticles were chosen as an active core for X-ray fluorescence, synthesized by a microwave-assisted hydrothermal method using a custom-designed sugar ligand as reducing agent. These nanoparticles were conjugated with the blue-emitting quantum dots by EDC-NHS treatment. The hybrid nanoparticles exhibited green emission (520 nm) under 490 nm excitation and showed reduced cytotoxicity, measured by real-time cell analysis (RTCA), compared with Rh nanoparticles alone, highlighting the passivating role played by the quantum dots. The hybrid complex constituted a multimodal contrast agent for bioimaging, as demonstrated by confocal microscopy (in vitro) and X-ray fluorescence phantom experiments.
Fayech, Besma. "Régulation des réseaux de transport multimodal : systèmes multi-agents et algorithmes évolutionnistes." Lille 1, 2003. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2003/50376-2003-323.pdf.
Mihoub, Alaeddine. "Apprentissage statistique de modèles de comportement multimodal pour les agents conversationnels interactifs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT079/document.
Face-to-face interaction is one of the most fundamental forms of human communication. It is a complex, coupled, multimodal dynamic system involving not only speech but numerous segments of the body, among which gaze, the orientation of the head, chest, and body, and facial and brachiomanual movements. Understanding and modeling this type of communication is a crucial step in designing interactive agents capable of carrying on credible conversations with human partners. Concretely, a model of multimodal behavior for interactive social agents faces the complex task of generating gestural scores given an analysis of the scene and an incremental estimation of the joint objectives pursued during the conversation. The objective of this thesis is to develop models of multimodal behavior that allow artificial agents to engage in relevant co-verbal communication with a human partner. While the vast majority of work in the field of human-agent interaction (HAI) is scripted using rule-based models, our approach relies on training statistical models from traces collected during exemplary interactions demonstrated by human trainers. In this context, we introduce "sensorimotor" models of behavior, which perform both the recognition of joint cognitive states and the generation of social signals in an incremental way. In particular, the proposed behavior models have to estimate the current interaction unit (IU) in which the interlocutors are jointly engaged and to predict the co-verbal behavior of the human trainer given the behavior of the interlocutor(s). The proposed models are all graphical models, i.e., Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN). The models were trained and evaluated, in particular compared with classic classifiers, on datasets collected during two different interactions.
Both interactions were carefully designed so as to collect, in a minimum amount of time, a sufficient number of exemplars of mutual attention and multimodal deixis of objects and places. Our contributions are completed by original methods for the interpretation and comparative evaluation of the properties of the proposed models. By comparing the output of the models with the original scores, we show that the HMM, thanks to its sequential-modeling properties, outperforms the simple classifiers in terms of performance. The semi-Markovian models (HSMM) further improve the estimation of sensorimotor states thanks to duration modeling. Finally, thanks to a rich structure of dependencies between variables learnt from the data, the DBN has the most convincing performance and demonstrates the most faithful multimodal coordination with the original multimodal events.
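For an HMM, the estimation task this abstract describes, inferring the current interaction unit from observed multimodal cues, is a Viterbi decoding problem. The following is a minimal sketch with toy probabilities; the state and observation names are invented for illustration and are not the thesis's actual model:

```python
import math

# Toy HMM: two interaction units (hidden states) and three observable cues.
states = ["mutual_attention", "deixis"]
start_p = {"mutual_attention": 0.6, "deixis": 0.4}
trans_p = {
    "mutual_attention": {"mutual_attention": 0.7, "deixis": 0.3},
    "deixis": {"mutual_attention": 0.4, "deixis": 0.6},
}
emit_p = {
    "mutual_attention": {"gaze_partner": 0.7, "gaze_object": 0.2, "pointing": 0.1},
    "deixis": {"gaze_partner": 0.1, "gaze_object": 0.5, "pointing": 0.4},
}

def viterbi(obs):
    """Return the most likely hidden-state sequence for the observed cues."""
    # Log-probabilities avoid numerical underflow on long sequences.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s given this observation.
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

print(viterbi(["gaze_partner", "pointing", "gaze_object"]))
# → ['mutual_attention', 'deixis', 'deixis']
```

The HSMM and DBN variants the abstract compares extend this same decoding with explicit state-duration models and richer inter-variable dependencies, respectively.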
Books on the topic "Multimodal agents"
Miehle, Juliana, Wolfgang Minker, Elisabeth André, and Koichiro Yoshino, eds. Multimodal Agents for Ageing and Multicultural Societies. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3476-5.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe, eds. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15557-9.
Emerson, Donald J., Doris Lee, Crystal M. Cummings, Jennifer Thompson, Bridget M. Wieghart, and Shelly Brown. Navigating Multi-Agency NEPA Processes to Advance Multimodal Transportation Projects. Washington, D.C.: Transportation Research Board, 2016. http://dx.doi.org/10.17226/23581.
Müller-Jentsch, Daniel. Transport policies for the Euro-Mediterranean free-trade area: An agenda for multimodal transport reform in the southern Mediterranean. Washington, D.C.: World Bank, 2002.
IVA 2010 (2010 Philadelphia, Pa.). Intelligent virtual agents: 10th international conference, IVA 2010, Philadelphia, PA, USA, September 20-22, 2010: proceedings. Berlin: Springer, 2010.
Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology). Springer, 2006.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Springer, 2015.
Mehta, M. P., L. W. Brady, J. M. Brown, C. Nieder, and H. P. Heilmann. Multimodal Concepts for Integration of Cytotoxic Drugs. Springer Berlin / Heidelberg, 2010.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer, 2022.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer Singapore Pte. Limited, 2021.
Book chapters on the topic "Multimodal agents"
Kipp, Michael, Alexis Heloir, Marc Schröder, and Patrick Gebhard. "Realizing Multimodal Behavior." In Intelligent Virtual Agents, 57–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_7.
Rehm, Matthias. "Multimodal Training Between Agents." In Intelligent Virtual Agents, 348–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39396-2_57.
Niewiadomski, Radosław, and Catherine Pelachaud. "Towards Multimodal Expression of Laughter." In Intelligent Virtual Agents, 231–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33197-8_24.
Ding, Yu, Catherine Pelachaud, and Thierry Artières. "Modeling Multimodal Behaviors from Speech Prosody." In Intelligent Virtual Agents, 217–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40415-3_19.
Bevacqua, Elisabetta, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder, and Catherine Pelachaud. "Multimodal Backchannels for Embodied Conversational Agents." In Intelligent Virtual Agents, 194–200. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_21.
Sharma, Parvesh, Amit Singh, Scott C. Brown, Niclas Bengtsson, Glenn A. Walter, Stephen R. Grobmyer, Nobutaka Iwakuma, Swadeshmukul Santra, Edward W. Scott, and Brij M. Moudgil. "Multimodal Nanoparticulate Bioimaging Contrast Agents." In Methods in Molecular Biology, 67–81. Totowa, NJ: Humana Press, 2010. http://dx.doi.org/10.1007/978-1-60761-609-2_5.
Reidsma, Dennis, Herwin van Welbergen, and Job Zwiers. "Multimodal Plan Representation for Adaptable BML Scheduling." In Intelligent Virtual Agents, 296–308. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_32.
Riviere, Jeremy, Carole Adam, Sylvie Pesty, Catherine Pelachaud, Nadine Guiraud, Dominique Longin, and Emiliano Lorini. "Expressive Multimodal Conversational Acts for SAIBA Agents." In Intelligent Virtual Agents, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_34.
Morency, Louis-Philippe, Iwan de Kok, and Jonathan Gratch. "Predicting Listener Backchannels: A Probabilistic Multimodal Approach." In Intelligent Virtual Agents, 176–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85483-8_18.
Thórisson, Kristinn R., Olafur Gislason, Gudny Ragna Jonsdottir, and Hrafn Th Thorisson. "A Multiparty Multimodal Architecture for Realtime Turntaking." In Intelligent Virtual Agents, 350–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_37.
Conference papers on the topic "Multimodal agents"
Teraphongphom, Nutte Tarn, Margaret A. Wheatley, Peter Chhour, and David P. Cormode. "Multimodal Polymeric Contrast Agents." In 2013 39th Annual Northeast Bioengineering Conference (NEBEC). IEEE, 2013. http://dx.doi.org/10.1109/nebec.2013.114.
Pelachaud, Catherine. "Multimodal expressive embodied conversational agents." In the 13th annual ACM international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1101149.1101301.
Pelachaud, Catherine, and Isabella Poggi. "Multimodal communication between synthetic agents." In the working conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/948496.948518.
Chaminade, Thierry. "How do artificial agents think?" In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139511.
Yalçin, Özge Nilay. "Modeling Empathy in Embodied Conversational Agents." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264977.
Tanaka, Hiroki, Hideki Negoro, Hidemi Iwasaka, and Satoshi Nakamura. "Listening Skills Assessment through Computer Agents." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3242970.
Lofaro, Daniel, and Donald Sofge. "Multimodal Control of Lighter-Than-Air Agents." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3266296.
Barange, Mukesh, Sandratra Rasendrasoa, Maël Bouabdelli, Julien Saunier, and Alexandre Pauchet. "Multimodal adaptive empathic agent architecture." In IVA '22: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514197.3551251.
Blomsma, Pieter A., Guido M. Linders, Julija Vaitonyte, and Max M. Louwerse. "Intrapersonal dependencies in multimodal behavior." In IVA '20: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383652.3423872.
Kiderle, Thomas, Hannes Ritschel, Silvan Mertes, and Elisabeth André. "Multimodal Irony for Virtual Characters." In IVA '23: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3570945.3607299.
Reports of organizations on the topic "Multimodal agents"
Sofge, Donald, Magdalena Bugajska, William Adams, Dennis Perzanowski, and Alan Schultz. Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada434975.
David, Allan E. Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610926.
Acosta Urribarra, Adis, Ivys Beitia Alvear, Marta Beatriz Caballero, Zinaida Guevara Atencio, Tayra Guillén Reina, Milvia Marín Pérez, Quil Mireya Parra de Mata, et al. Aprendamos Todos a Leer: guía del profesor: grado tercero 3: 2da edición. Inter-American Development Bank, April 2023. http://dx.doi.org/10.18235/0004826.
Acosta Urribarra, Adis, Ivys Beitia Alvear, Marta Beatriz Caballero, Zinaida Guevara Atencio, Tayra Guillén Reina, Milvia Marín Pérez, Quil Mireya Parra de Mata, et al. Aprendamos Todos a Leer: guía del alumno: grado tercero 3: 2da edición. Banco Interamericano de Desarrollo, April 2023. http://dx.doi.org/10.18235/0004849.
Lumpkin, Shamsie, Isaac Parrish, Austin Terrell, and Dwayne Accardo. Pain Control: Opioid vs. Nonopioid Analgesia During the Immediate Postoperative Period. University of Tennessee Health Science Center, July 2021. http://dx.doi.org/10.21007/con.dnp.2021.0008.
El papel de los puertos en la transición energética. Universidad de Deusto, 2022. http://dx.doi.org/10.18543/rvso1446.