Selected scientific literature on the topic "Multimodal agents"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Multimodal agents".
Next to every source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online if it is included in the metadata.
Journal articles on the topic "Multimodal agents"
Pelachaud, Catherine, and Isabella Poggi. "Multimodal embodied agents". Knowledge Engineering Review 17, no. 2 (June 2002): 181–96. http://dx.doi.org/10.1017/s0269888902000218.
Frullano, Luca, and Thomas J. Meade. "Multimodal MRI contrast agents". JBIC Journal of Biological Inorganic Chemistry 12, no. 7 (July 21, 2007): 939–49. http://dx.doi.org/10.1007/s00775-007-0265-3.
Relyea, Robert, Darshan Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl, and Raymond Ptucha. "Multimodal Localization for Autonomous Agents". Electronic Imaging 2019, no. 7 (January 13, 2019): 451-1. http://dx.doi.org/10.2352/issn.2470-1173.2019.7.iriacv-451.
Zhang, Zongren, Kexian Liang, Sharon Bloch, Mikhail Berezin, and Samuel Achilefu. "Monomolecular Multimodal Fluorescence-Radioisotope Imaging Agents". Bioconjugate Chemistry 16, no. 5 (September 2005): 1232–39. http://dx.doi.org/10.1021/bc050136s.
Taroni, Andrea. "Multimodal contrast agents combat cardiovascular disease". Materials Today 11, no. 11 (November 2008): 13. http://dx.doi.org/10.1016/s1369-7021(08)70232-3.
Kopp, Stefan, and Ipke Wachsmuth. "Synthesizing multimodal utterances for conversational agents". Computer Animation and Virtual Worlds 15, no. 1 (March 2004): 39–52. http://dx.doi.org/10.1002/cav.6.
Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao, and Tagyoung Chung. "Building Goal-Oriented Dialogue Systems with Situated Visual Context". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.
Perdigon-Lagunes, Pedro, Octavio Estevez, Cristina Zorrilla Cangas, and Raul Herrera-Becerra. "Gd – Gd2O3 multimodal nanoparticles as labeling agents". MRS Advances 3, no. 14 (2018): 761–66. http://dx.doi.org/10.1557/adv.2018.244.
Burke, Benjamin P., Christopher Cawthorne, and Stephen J. Archibald. "Multimodal nanoparticle imaging agents: design and applications". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, no. 2107 (October 16, 2017): 20170261. http://dx.doi.org/10.1098/rsta.2017.0261.
Knez, Damijan, Izidor Sosič, Ana Mitrović, Anja Pišlar, Janko Kos, and Stanislav Gobec. "8-Hydroxyquinoline-based anti-Alzheimer multimodal agents". Monatshefte für Chemie - Chemical Monthly 151, no. 7 (July 2020): 1111–20. http://dx.doi.org/10.1007/s00706-020-02651-0.
Theses / dissertations on the topic "Multimodal agents"
Bendal, Ove-Andre. "Integration of multimodal input by using agents". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9251.
Today, user interfaces normally consist of a screen, with a pointing device and a keyboard for input. However, as more advanced technologies and methods appear, there are good opportunities to use them for more natural and effective human-computer interfaces. The main motivation is a more natural, easy-to-use interface in which the computer understands the user without requiring much effort from the user. Intelligent interfaces could be one way to achieve this goal. The main focus of this thesis is multimodal input, which combines different input modalities to achieve the user's goal. A framework has been designed in which the user can switch between input modalities. The system integrates the information given in different input modalities into one joint meaning. In this architecture, input can be either command input or location input, and different modalities can be used for each input type. The example described later in this thesis combines either speech or written text as command input with either map input or physical position as location input. An agent-based blackboard architecture is used for collecting input. Agents collect information directly from the user. Each agent represents its own input modality and is responsible for analysing its input. Once this is done, the agent sends the information to a common blackboard, which holds the latest information from each agent. A dedicated agent responsible for fusing this information collects it from the blackboard and integrates it into one joint interpretation, which decides what should be done to which object. Since the modalities are independent of each other, further modalities can easily be added with only small changes to other parts of the system, as long as they provide command or location input that conforms to the current representation structure.
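To make the fusion step concrete, the sketch below illustrates the kind of agent-based blackboard described in this abstract: each modality agent posts its own interpretation to a shared blackboard, and a fusion agent combines one command with one location into a joint meaning. The agent classes, field names, and example inputs are invented for illustration and are not taken from the thesis.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Blackboard:
    """Holds the latest interpretation posted by each modality agent."""
    entries: Dict[str, dict] = field(default_factory=dict)

    def post(self, modality: str, interpretation: dict) -> None:
        self.entries[modality] = interpretation

class SpeechCommandAgent:
    """Hypothetical agent: turns a spoken or typed utterance into a command."""
    modality = "speech"
    def analyse(self, utterance: str) -> dict:
        verb = utterance.strip().split()[0].lower()
        return {"type": "command", "action": verb}

class MapLocationAgent:
    """Hypothetical agent: turns a map click into a location."""
    modality = "map"
    def analyse(self, x: float, y: float) -> dict:
        return {"type": "location", "position": (x, y)}

class FusionAgent:
    """Reads the blackboard and merges one command with one location."""
    def fuse(self, board: Blackboard) -> Optional[dict]:
        command = next((e for e in board.entries.values() if e["type"] == "command"), None)
        location = next((e for e in board.entries.values() if e["type"] == "location"), None)
        if command and location:
            return {"action": command["action"], "target": location["position"]}
        return None  # joint meaning not yet complete

# Usage: each agent posts its analysis; the fusion agent derives the joint meaning.
board = Blackboard()
board.post(SpeechCommandAgent.modality, SpeechCommandAgent().analyse("move the unit here"))
board.post(MapLocationAgent.modality, MapLocationAgent().analyse(12.5, 7.0))
print(FusionAgent().fuse(board))  # {'action': 'move', 'target': (12.5, 7.0)}
```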
Mancini, Maurizio. "Multimodal distinctive behavior for expressive embodied conversational agents". Paris 8, 2008. https://octaviana.fr/items/show/9956#?c=0&m=0&s=0&cv=0.
Embodied Conversational Agents are a new type of computer interface with human-like bodies and conversational skills. Users interacting with agents will be more engaged and participative if the agents exhibit behaviors that look coherent across different situations and emotional states. In the present work, we aim at increasing agents' believability by looking at two main aspects of the problem: (i) an agent must be able to show its emotional state and communicative intentions not only through specific facial expressions, gestures, etc., but also by varying the quality of its movements (e.g., their speed, amplitude, etc.) and the choice of the modalities used to communicate; (ii) the agent must maintain a distinctive behavior tendency that remains apparent during any communication. We have developed a model that addresses these two issues and have evaluated the realism and believability of the resulting agent's behaviors through perceptual tests and an application scenario. The resulting system is highly extensible and configurable, and it can also be used as a research tool to study human communication.
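To give a feel for what "varying the quality of movements" can mean computationally, here is a minimal sketch in which an expressivity profile (speed, amplitude, preferred modality) modulates a canonical gesture. The parameter names and scaling rules are invented for illustration; they do not reproduce the thesis's model or any existing agent platform's API.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    """Toy behaviour-quality parameters (names are illustrative only)."""
    speed: float = 1.0                   # temporal extent multiplier
    amplitude: float = 1.0               # spatial extent multiplier
    preferred_modality: str = "gesture"  # e.g. "gesture", "face", "head"

def realize_beat_gesture(base_duration_s: float, base_extent_cm: float,
                         style: Expressivity) -> dict:
    """Scale a canonical gesture by the agent's distinctive expressivity profile."""
    return {
        "modality": style.preferred_modality,
        "duration_s": base_duration_s / style.speed,
        "extent_cm": base_extent_cm * style.amplitude,
    }

# Two agents perform the "same" communicative act with distinctive qualities.
calm_agent = Expressivity(speed=0.8, amplitude=0.6)
lively_agent = Expressivity(speed=1.5, amplitude=1.4)
print(realize_beat_gesture(1.0, 20.0, calm_agent))    # slower, smaller movement
print(realize_beat_gesture(1.0, 20.0, lively_agent))  # faster, wider movement
```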
Gaciarz, Matthis. "Régulation de trafic urbain multimodal : une modélisation multi-agents". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1281/document.
For several decades, urban congestion has become more and more widespread and deteriorates the quality of life of city dwellers. Several methods are used to reduce urban congestion, notably traffic regulation and the promotion of public transportation. Since the 1990s, tools from artificial intelligence, particularly distributed systems and multi-agent systems, have allowed new traffic-regulation methods to be designed; their distributed nature makes it easier to take into account the complexity of traffic-related problems. Moreover, the improvement of vehicles' communication abilities and the arrival of autonomous vehicles allow new regulation approaches to be considered. The research presented in this work is twofold. First, we propose a method for traffic regulation at an intersection based on automatic negotiation. Our method is based on an argumentation system describing the state of the traffic and the preferences of each vehicle, relying on reasoning methods for vehicles and infrastructures. In the second part of this thesis, we propose a method for coordinating buses with the rest of the traffic. This method allows a bus to coordinate in an anticipatory way with the next intersections on its route, in order to define a common regulation policy that lets the bus reach its next stop without suffering from potential congestion.
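As a rough illustration of the negotiation idea sketched in this abstract, the toy example below has vehicle agents submit crossing requests carrying simple arguments (waiting time, public-transport status), which an intersection agent ranks into a crossing order. The request fields and scoring weights are invented for the example; they do not reproduce the thesis's argumentation system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CrossingRequest:
    """A vehicle's request to cross, with the arguments it puts forward."""
    vehicle_id: str
    waiting_time_s: float      # how long it has been waiting
    is_public_transport: bool  # e.g., a bus carrying many passengers

def argument_strength(req: CrossingRequest) -> float:
    """Toy scoring of a request's arguments (weights are illustrative only)."""
    score = req.waiting_time_s
    if req.is_public_transport:
        score += 30.0  # favour vehicles that carry more people
    return score

def negotiate_crossing_order(requests: List[CrossingRequest]) -> List[str]:
    """Intersection agent: rank conflicting requests by argument strength."""
    ranked = sorted(requests, key=argument_strength, reverse=True)
    return [r.vehicle_id for r in ranked]

# Usage: a bus and two cars compete for the same conflict zone.
requests = [
    CrossingRequest("car_1", waiting_time_s=12.0, is_public_transport=False),
    CrossingRequest("bus_7", waiting_time_s=5.0, is_public_transport=True),
    CrossingRequest("car_2", waiting_time_s=20.0, is_public_transport=False),
]
print(negotiate_crossing_order(requests))  # ['bus_7', 'car_2', 'car_1']
```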
Kothapalli, Satya V. V. N. "Nano-Engineered Contrast Agents : Toward Multimodal Imaging and Acoustophoresis". Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172397.
Sathiyajith, Cuhananthan Wijayanayagam. "Investigations towards new multidentate ligands as potential multimodal imaging agents". Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55044/.
Fragueiro, Oihane. "Developing nanoparticles as contrast agents for cell labelling and multimodal bioimaging". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028423/.
Stocky, Thomas A. (Thomas August), 1978-. "Conveying routes: multimodal generation and spatial intelligence in embodied conversational agents". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87833.
M.Eng. thesis by Thomas A. Stocky. Includes bibliographical references (leaves 38–40).
Kilic, Nüzhet Inci. "Graphene Quantum Dots as Fluorescent and Passivation Agents for Multimodal Bioimaging". Thesis, KTH, Tillämpad fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298302.
Since their discovery, zero-dimensional graphene (carbon) quantum dots have attracted attention in bio-related applications, particularly for their optical properties, chemical stability, and easily modifiable surface. This thesis focuses on a green synthesis route for nitrogen-doped graphene quantum dots intended for bimodal bioimaging with X-ray fluorescence and optical fluorescence. Both conventional and microwave-assisted solvothermal synthesis methods were used to investigate the effect of the method on the synthesized quantum dots. The microwave-assisted method enabled the synthesis of uniform quantum dots with excitation-independent properties owing to highly controllable reaction conditions. It was demonstrated that the molecular structure of the precursors affected the optical fluorescence properties of the graphene quantum dots. By choosing specific precursors, quantum dots emitting in both blue and red were obtained, with emission maxima at 438 and 605 nm under excitation at 390 and 585 nm, respectively. Amine-functionalized Rh nanoparticles were chosen as an X-ray-fluorescence-active core, synthesized via a microwave-assisted hydrothermal method with a custom-designed sugar ligand as the reducing agent. These nanoparticles were conjugated with the blue-emitting quantum dots through EDC-NHS treatment. The hybrid nanoparticles exhibited green emission (520 nm) under 490 nm excitation and showed reduced cytotoxicity, as measured by real-time cell analysis (RTCA), compared with the Rh nanoparticles alone, highlighting the passivating role played by the quantum dots. The hybrid complex constitutes a multimodal contrast agent for bioimaging, as demonstrated by confocal microscopy (in vitro) and X-ray fluorescence phantom experiments.
Fayech, Besma. "Régulation des réseaux de transport multimodal : systèmes multi-agents et algorithmes évolutionnistes". Lille 1, 2003. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2003/50376-2003-323.pdf.
Mihoub, Alaeddine. "Apprentissage statistique de modèles de comportement multimodal pour les agents conversationnels interactifs". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT079/document.
Face-to-face interaction is one of the most fundamental forms of human communication. It is a complex, coupled multimodal dynamic system involving not only speech but also numerous body segments, including gaze, the orientation of the head, chest, and body, and facial and brachiomanual movements. Understanding and modeling this type of communication is a crucial step in designing interactive agents capable of engaging in credible conversations with human partners. Concretely, a model of multimodal behavior for interactive social agents faces the complex task of generating gestural scores given an analysis of the scene and an incremental estimation of the joint goals pursued during the conversation. The objective of this thesis is to develop models of multimodal behavior that allow artificial agents to engage in relevant co-verbal communication with a human partner. While the vast majority of work in the field of human-agent interaction (HAI) is scripted using rule-based models, our approach relies on training statistical models from traces collected during exemplary interactions demonstrated by human trainers. In this context, we introduce "sensorimotor" models of behavior, which perform both the recognition of joint cognitive states and the generation of social signals in an incremental way. In particular, the proposed behavior models have to estimate the current interaction unit (IU) in which the interlocutors are jointly engaged and to predict the co-verbal behavior of the human trainer given the behavior of the interlocutor(s). The proposed models are all graphical models, i.e. Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN). The models were trained and evaluated, in particular compared with classic classifiers, using datasets collected during two different interactions. Both interactions were carefully designed so as to collect, in a minimum amount of time, a sufficient number of exemplars of mutual attention and multimodal deixis of objects and places. Our contributions are completed by original methods for the interpretation and comparative evaluation of the properties of the proposed models. By comparing the output of the models with the original scores, we show that the HMM, thanks to its sequential-modeling properties, outperforms the simple classifiers in terms of performance. The semi-Markovian models (HSMM) further improve the estimation of sensorimotor states thanks to duration modeling. Finally, thanks to a rich structure of dependencies between variables learned from the data, the DBN shows the most convincing results, with both the best performance and the multimodal coordination most faithful to the original multimodal events.
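For readers unfamiliar with the sequential models compared in this abstract, the sketch below decodes a toy discrete HMM with the Viterbi algorithm to recover the most likely sequence of interaction units from observed multimodal cues. The states, observation labels, and probabilities are invented for illustration; they are not the thesis's interaction units, signals, or learned parameters.

```python
import numpy as np

# Toy interaction units (hidden states) and multimodal observations (symbols).
states = ["read", "point", "verify"]
observations = ["gaze_partner", "gaze_object", "deictic_gesture"]

start_p = np.array([0.6, 0.3, 0.1])            # P(first IU)
trans_p = np.array([[0.7, 0.2, 0.1],           # P(next IU | current IU)
                    [0.3, 0.5, 0.2],
                    [0.2, 0.3, 0.5]])
emit_p = np.array([[0.6, 0.3, 0.1],            # P(observation | IU)
                   [0.1, 0.3, 0.6],
                   [0.5, 0.4, 0.1]])

def viterbi(obs_seq):
    """Most likely sequence of interaction units given observed multimodal cues."""
    T, N = len(obs_seq), len(states)
    delta = np.zeros((T, N))                     # best log-probability per state
    back = np.zeros((T, N), dtype=int)           # backpointers
    delta[0] = np.log(start_p) + np.log(emit_p[:, obs_seq[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(trans_p)   # rows: previous state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit_p[:, obs_seq[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # trace the best path backwards
        path.append(int(back[t][path[-1]]))
    return [states[i] for i in reversed(path)]

obs = [observations.index(o) for o in
       ["gaze_partner", "gaze_object", "deictic_gesture", "gaze_partner"]]
print(viterbi(obs))
```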
Books on the topic "Multimodal agents"
Miehle, Juliana, Wolfgang Minker, Elisabeth André, and Koichiro Yoshino, eds. Multimodal Agents for Ageing and Multicultural Societies. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3476-5.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe, eds. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15557-9.
Emerson, Donald J., Doris Lee, Crystal M. Cummings, Jennifer Thompson, Bridget M. Wieghart, and Shelly Brown. Navigating Multi-Agency NEPA Processes to Advance Multimodal Transportation Projects. Washington, D.C.: Transportation Research Board, 2016. http://dx.doi.org/10.17226/23581.
Müller-Jentsch, Daniel. Transport policies for the Euro-Mediterranean free-trade area: An agenda for multimodal transport reform in the southern Mediterranean. Washington, D.C.: World Bank, 2002.
IVA 2010 (2010 Philadelphia, Pa.). Intelligent virtual agents: 10th international conference, IVA 2010, Philadelphia, PA, USA, September 20-22, 2010: proceedings. Berlin: Springer, 2010.
Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology). Springer, 2006.
Böck, Ronald, Francesca Bonin, Nick Campbell, and Ronald Poppe. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Springer, 2015.
Mehta, M. P., L. W. Brady, J. M. Brown, C. Nieder, and H. P. Heilmann. Multimodal Concepts for Integration of Cytotoxic Drugs. Springer Berlin / Heidelberg, 2010.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer, 2022.
André, Elisabeth, Wolfgang Minker, Juliana Miehle, and Koichiro Yoshino. Multimodal Agents for Ageing and Multicultural Societies: Communications of NII Shonan Meetings. Springer Singapore Pte. Limited, 2021.
Book chapters on the topic "Multimodal agents"
Kipp, Michael, Alexis Heloir, Marc Schröder, and Patrick Gebhard. "Realizing Multimodal Behavior". In Intelligent Virtual Agents, 57–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_7.
Rehm, Matthias. "Multimodal Training Between Agents". In Intelligent Virtual Agents, 348–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39396-2_57.
Niewiadomski, Radosław, and Catherine Pelachaud. "Towards Multimodal Expression of Laughter". In Intelligent Virtual Agents, 231–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33197-8_24.
Ding, Yu, Catherine Pelachaud, and Thierry Artières. "Modeling Multimodal Behaviors from Speech Prosody". In Intelligent Virtual Agents, 217–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40415-3_19.
Bevacqua, Elisabetta, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder, and Catherine Pelachaud. "Multimodal Backchannels for Embodied Conversational Agents". In Intelligent Virtual Agents, 194–200. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_21.
Sharma, Parvesh, Amit Singh, Scott C. Brown, Niclas Bengtsson, Glenn A. Walter, Stephen R. Grobmyer, Nobutaka Iwakuma, Swadeshmukul Santra, Edward W. Scott, and Brij M. Moudgil. "Multimodal Nanoparticulate Bioimaging Contrast Agents". In Methods in Molecular Biology, 67–81. Totowa, NJ: Humana Press, 2010. http://dx.doi.org/10.1007/978-1-60761-609-2_5.
Reidsma, Dennis, Herwin van Welbergen, and Job Zwiers. "Multimodal Plan Representation for Adaptable BML Scheduling". In Intelligent Virtual Agents, 296–308. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_32.
Riviere, Jeremy, Carole Adam, Sylvie Pesty, Catherine Pelachaud, Nadine Guiraud, Dominique Longin, and Emiliano Lorini. "Expressive Multimodal Conversational Acts for SAIBA Agents". In Intelligent Virtual Agents, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23974-8_34.
Morency, Louis-Philippe, Iwan de Kok, and Jonathan Gratch. "Predicting Listener Backchannels: A Probabilistic Multimodal Approach". In Intelligent Virtual Agents, 176–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85483-8_18.
Thórisson, Kristinn R., Olafur Gislason, Gudny Ragna Jonsdottir, and Hrafn Th. Thorisson. "A Multiparty Multimodal Architecture for Realtime Turntaking". In Intelligent Virtual Agents, 350–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15892-6_37.
Conference papers on the topic "Multimodal agents"
Teraphongphom, Nutte Tarn, Margaret A. Wheatley, Peter Chhour, and David P. Cormode. "Multimodal Polymeric Contrast Agents". In 2013 39th Annual Northeast Bioengineering Conference (NEBEC). IEEE, 2013. http://dx.doi.org/10.1109/nebec.2013.114.
Pelachaud, Catherine. "Multimodal expressive embodied conversational agents". In the 13th annual ACM international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1101149.1101301.
Pelachaud, Catherine, and Isabella Poggi. "Multimodal communication between synthetic agents". In the working conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/948496.948518.
Chaminade, Thierry. "How do artificial agents think?" In ICMI '17: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139511.
Yalçin, Özge Nilay. "Modeling Empathy in Embodied Conversational Agents". In ICMI '18: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264977.
Tanaka, Hiroki, Hideki Negoro, Hidemi Iwasaka, and Satoshi Nakamura. "Listening Skills Assessment through Computer Agents". In ICMI '18: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3242970.
Lofaro, Daniel, and Donald Sofge. "Multimodal Control of Lighter-Than-Air Agents". In ICMI '18: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3266296.
Barange, Mukesh, Sandratra Rasendrasoa, Maël Bouabdelli, Julien Saunier, and Alexandre Pauchet. "Multimodal adaptive empathic agent architecture". In IVA '22: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514197.3551251.
Blomsma, Pieter A., Guido M. Linders, Julija Vaitonyte, and Max M. Louwerse. "Intrapersonal dependencies in multimodal behavior". In IVA '20: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383652.3423872.
Kiderle, Thomas, Hannes Ritschel, Silvan Mertes, and Elisabeth André. "Multimodal Irony for Virtual Characters". In IVA '23: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3570945.3607299.
Reports by organizations on the topic "Multimodal agents"
Sofge, Donald, Magdalena Bugajska, William Adams, Dennis Perzanowski, and Alan Schultz. Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada434975.
David, Allan E. Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610926.
Acosta Urribarra, Adis, Ivys Beitia Alvear, Marta Beatriz Caballero, Zinaida Guevara Atencio, Tayra Guillén Reina, Milvia Marín Pérez, Quil Mireya Parra de Mata, et al. Aprendamos Todos a Leer: guía del profesor: grado tercero 3: 2da edición. Inter-American Development Bank, April 2023. http://dx.doi.org/10.18235/0004826.
Acosta Urribarra, Adis, Ivys Beitia Alvear, Marta Beatriz Caballero, Zinaida Guevara Atencio, Tayra Guillén Reina, Milvia Marín Pérez, Quil Mireya Parra de Mata, et al. Aprendamos Todos a Leer: guía del alumno: grado tercero 3: 2da edición. Banco Interamericano de Desarrollo, April 2023. http://dx.doi.org/10.18235/0004849.
Lumpkin, Shamsie, Isaac Parrish, Austin Terrell, and Dwayne Accardo. Pain Control: Opioid vs. Nonopioid Analgesia During the Immediate Postoperative Period. University of Tennessee Health Science Center, July 2021. http://dx.doi.org/10.21007/con.dnp.2021.0008.
El papel de los puertos en la transición energética. Universidad de Deusto, 2022. http://dx.doi.org/10.18543/rvso1446.