Journal articles on the topic "Agents multimodaux"

To view other types of publications on this topic, follow the link: Agents multimodaux.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the top 50 journal articles for research on the topic "Agents multimodaux."

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Lisetti, C. "Le paradigme MAUI pour des agents multimodaux d'interface homme-machine socialement intelligents." Revue d'intelligence artificielle 20, no. 4-5 (October 1, 2006): 583–606. http://dx.doi.org/10.3166/ria.20.583-606.

2

Pelachaud, Catherine, and Isabella Poggi. "Multimodal embodied agents." Knowledge Engineering Review 17, no. 2 (June 2002): 181–96. http://dx.doi.org/10.1017/s0269888902000218.

Abstract:
Believable interactive embodied agents: among the goals of research on autonomous agents, one important aim is to build believable interactive embodied agents that are suitable for application in friendly interfaces for e-commerce, tourist and service query systems, entertainment (e.g. synthetic actors) and education (pedagogical agents, agents for help and instruction to the hearing impaired).
3

Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao, and Tagyoung Chung. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.

Abstract:
Goal-oriented dialogue agents can comfortably utilize the conversational context and understand their users' goals. However, in visually driven user experiences, these conversational agents are also required to make sense of the screen context in order to provide a proper interactive experience. In this paper, we propose a novel multimodal conversational framework in which the dialogue agent's next action and its arguments are derived jointly, conditioned on both the conversational and the visual context. We demonstrate the proposed approach via a prototypical furniture shopping experience for a multimodal virtual assistant.
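
For readers who want a concrete picture of the kind of model this abstract describes, the following minimal PyTorch sketch jointly predicts a next action and its argument from fused dialogue and screen embeddings. It is an illustration only, not the authors' implementation: the class name, feature sizes, and simple concatenation-based fusion are assumptions.

    import torch
    import torch.nn as nn

    class VisuallyGroundedPolicy(nn.Module):
        """Toy next-action predictor conditioned on both dialogue and screen context."""
        def __init__(self, text_dim=256, vision_dim=256, hidden=256, num_actions=12, num_slots=20):
            super().__init__()
            self.fuse = nn.Sequential(nn.Linear(text_dim + vision_dim, hidden), nn.ReLU())
            self.action_head = nn.Linear(hidden, num_actions)    # e.g. "show_item", "ask_color"
            self.argument_head = nn.Linear(hidden, num_slots)    # e.g. which on-screen item to act on

        def forward(self, dialogue_emb, screen_emb):
            # Concatenate the two context embeddings and derive action and argument jointly.
            fused = self.fuse(torch.cat([dialogue_emb, screen_emb], dim=-1))
            return self.action_head(fused), self.argument_head(fused)

    # Example with random embeddings standing in for encoded dialogue history and screen state.
    policy = VisuallyGroundedPolicy()
    action_logits, argument_logits = policy(torch.randn(1, 256), torch.randn(1, 256))
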
4

Frullano, Luca, and Thomas J. Meade. "Multimodal MRI contrast agents." JBIC Journal of Biological Inorganic Chemistry 12, no. 7 (July 21, 2007): 939–49. http://dx.doi.org/10.1007/s00775-007-0265-3.

5

Relyea, Robert, Darshan Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl, and Raymond Ptucha. "Multimodal Localization for Autonomous Agents." Electronic Imaging 2019, no. 7 (January 13, 2019): 451–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.7.iriacv-451.

6

Nishimura, Yoshitaka, Kazutaka Kushida, Hiroshi Dohi, Mitsuru Ishizuka, Johane Takeuchi, Mikio Nakano, and Hiroshi Tsujino. "Development of Multimodal Presentation Markup Language MPML-HR for Humanoid Robots and Its Psychological Evaluation." International Journal of Humanoid Robotics 4, no. 1 (March 2007): 1–20. http://dx.doi.org/10.1142/s0219843607000947.

Abstract:
Animated agents that act and speak as attendants to guests on shopping web sites are becoming increasingly popular. Inspired by this development, we propose a new method of presentation using a humanoid robot. Humanoid presentations are effective in a real environment because they can move and look around at the audience similar to a human presenter. We developed a simple script language for multimodal presentations by a humanoid robot called MPML-HR, which is a descendant of the Multimodal Presentation Markup Language (MPML) originally developed for animated agents. MPML-HR allows many non-specialists to easily write multimodal presentations for a humanoid robot. We further evaluated humanoid robots' presentation ability using MPML-HR to find the difference in audience impressions between the humanoid robot and the animated agent. Psychological evaluation was conducted to compare the impressions of a humanoid robot's presentation with an animated agent's presentation. Using the Semantic Differential (SD) method and direct questioning, we measured the difference in audience impressions between an animated agent and a humanoid robot.
7

Zhang, Zongren, Kexian Liang, Sharon Bloch, Mikhail Berezin, and Samuel Achilefu. "Monomolecular Multimodal Fluorescence-Radioisotope Imaging Agents." Bioconjugate Chemistry 16, no. 5 (September 2005): 1232–39. http://dx.doi.org/10.1021/bc050136s.

8

Taroni, Andrea. "Multimodal contrast agents combat cardiovascular disease." Materials Today 11, no. 11 (November 2008): 13. http://dx.doi.org/10.1016/s1369-7021(08)70232-3.

9

Kopp, Stefan, and Ipke Wachsmuth. "Synthesizing multimodal utterances for conversational agents." Computer Animation and Virtual Worlds 15, no. 1 (March 2004): 39–52. http://dx.doi.org/10.1002/cav.6.

10

Ebling, Ângelo Augusto, and Sylvio Péllico Netto. "MODELAGEM DE OCORRÊNCIA DE COORTES NA ESTRUTURA DIAMÉTRICA DA Araucaria angustifolia (Bertol.) Kuntze." CERNE 21, no. 2 (June 2015): 251–57. http://dx.doi.org/10.1590/01047760201521111667.

Abstract:
Studies of the diameter structure of native forests are essential for understanding stand development and for providing growth and yield parameters sufficient to generate estimates that support sustainable management. However, the mathematical modelling of probability functions, such as density functions, is difficult to apply to multimodal distributions. The species Araucaria angustifolia (Bertol.) Kuntze, of social, environmental and economic importance, exhibits a multimodal distribution pattern, forming demographic units called cohorts, which originate from anthropogenic and natural agents acting on its niches. Therefore, based on inventory data of trees with diameter at breast height equal to or greater than 9.5 cm (DBH ≥ 9.5 cm) from the São Francisco de Paula National Forest, RS, different probability density functions were tested. The best fit to the data series is a truncated seventh-degree polynomial function, which, in addition to keeping fitted values very close to the observed ones, preserved the multimodal configuration of the distribution.
11

Perdigon-Lagunes, Pedro, Octavio Estevez, Cristina Zorrilla Cangas, and Raul Herrera-Becerra. "Gd – Gd2O3 multimodal nanoparticles as labeling agents." MRS Advances 3, no. 14 (2018): 761–66. http://dx.doi.org/10.1557/adv.2018.244.

Abstract:
Lanthanide nanoparticles' ability to couple many imaging techniques into a single labeling agent has raised high expectations for personalized medicine, or "Theranostics". This is possible due to their intrinsic physico-chemical properties. By combining different imaging techniques, physicians may provide better treatment and perform surgical procedures that might increase the survival rate of patients. Hence, there is an enormous opportunity for the development of lanthanide multimodal nanoparticles. For this study, we synthesized Gd–Gd2O3 nanoparticles at room temperature by a reduction method assisted by tannic acid, and later we doped them with different ratios of Eu. The nanoparticles were analyzed by high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, luminescence, and magnetic characterization. We found small nanoparticles with a mean size of 5 nm, covered in a carbonaceous layer. In addition, different emissions were detected depending on Eu concentration. Finally, magnetization vs. temperature curves recorded under zero-field-cooled (ZFC) and field-cooled (FC) conditions exhibit an antiferromagnetic-to-ferromagnetic phase transition in samples with Gd2O3, and hysteresis loops recorded at 100 Oe and 2 K showed appreciable magnetization without magnetic remanence. Hence, these nanomaterials have interesting properties to be tested in biocompatibility assays.
12

Burke, Benjamin P., Christopher Cawthorne, and Stephen J. Archibald. "Multimodal nanoparticle imaging agents: design and applications." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, no. 2107 (October 16, 2017): 20170261. http://dx.doi.org/10.1098/rsta.2017.0261.

Abstract:
Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5–10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue ‘Challenges for chemistry in molecular imaging’.
13

Knez, Damijan, Izidor Sosič, Ana Mitrović, Anja Pišlar, Janko Kos, and Stanislav Gobec. "8-Hydroxyquinoline-based anti-Alzheimer multimodal agents." Monatshefte für Chemie - Chemical Monthly 151, no. 7 (July 2020): 1111–20. http://dx.doi.org/10.1007/s00706-020-02651-0.

14

Čereković, Aleksandra, and Igor S. Pandžić. "Multimodal behavior realization for embodied conversational agents." Multimedia Tools and Applications 54, no. 1 (April 22, 2010): 143–64. http://dx.doi.org/10.1007/s11042-010-0530-2.

15

Kim, Chulhong, and Zhongping Chen. "Multimodal photoacoustic imaging: systems, applications, and agents." Biomedical Engineering Letters 8, no. 2 (May 2018): 137–38. http://dx.doi.org/10.1007/s13534-018-0071-6.

16

Pliuškevičius, Regimantas. "Logic of knowledge with infinitely many agents." Lietuvos matematikos rinkinys 46 (September 21, 2023): 247–52. http://dx.doi.org/10.15388/lmr.2006.30719.

17

Prodi, L., E. Rampazzo, F. Rastrelli, A. Speghini, and N. Zaccheroni. "Imaging agents based on lanthanide doped nanoparticles." Chemical Society Reviews 44, no. 14 (2015): 4922–52. http://dx.doi.org/10.1039/c4cs00394b.

18

Bryndin, Evgeniy. "Robotics by multimodal self-organizing ensembles of software and hardware agents with artificial intelligence." Research on Intelligent Manufacturing and Assembly 2, no. 1 (May 20, 2024): 60–69. http://dx.doi.org/10.25082/rima.2023.01.003.

Abstract:
Self-organizing ensembles of software and hardware agents with artificial intelligence model the intellectual abilities of a person's natural intelligence. The Creator endowed man with various types of intellectual abilities: generation of meanings, perception of meanings, meaningful actions and behavior, sensory reaction to meanings, emotional reaction to meanings. Based on the synergy of various intellectual abilities, a person carries out life activities. For example, Dialogue is conducted on the basis of two intellectual abilities: the generation and perception of meanings. A multimodal self-organizing ensemble of intelligent software and hardware agents with artificial intelligence, based on existing knowledge and skills, is able to write poetry, draw pictures, give recommendations and solutions to specialists, manage production and systems in various sectors of the economy, and take part in scientific research. Multimodal ensembles of intelligent agents, modeling the functions of natural intelligence, contain a functional control structure. To ensure the safe and reliable use of multimodal ensembles of intelligent agents, they are being standardized internationally under the guidance of ISO. International standardization of multimodal ensembles of intelligent agents expands the market and reduces the risks of their use.
19

Choi, Eunjoo, Francis Sahngun Nahm, Woong Ki Han, Pyung-Bok Lee, and Jihun Jo. "Topical agents: a thoughtful choice for multimodal analgesia." Korean Journal of Anesthesiology 73, no. 5 (October 1, 2020): 384–93. http://dx.doi.org/10.4097/kja.20357.

Abstract:
For over a thousand years, various substances have been applied to the skin to treat pain. Some of these substances have active ingredients that we still use today. However, some have been discontinued due to their harmful effect, while others have been long forgotten. Recent concerns regarding the cardiovascular and renal risk from nonsteroidal anti-inflammatory drugs, and issues with opioids, have resulted in increasing demand and attention to non-systemic topical alternatives. There is increasing evidence of the efficacy and safety of topical agents in pain control. Topical analgesics are great alternatives for pain management and are an essential part of multimodal analgesia. This review aims to describe essential aspects of topical drugs that physicians should consider in their practice as part of multimodal analgesia. This review describes the mechanism of popular topical analgesics and also introduces the most recently released and experimental topical medications.
20

Yang, Chang-Tong, Parasuraman Padmanabhan, and Balázs Z. Gulyás. "Gadolinium(iii) based nanoparticles for T1-weighted magnetic resonance imaging probes." RSC Advances 6, no. 65 (2016): 60945–66. http://dx.doi.org/10.1039/c6ra07782j.

21

Braşoveanu, Adrian, Adriana Manolescu, and Marian Nicu Spînu. "Generic Multimodal Ontologies for Human-Agent Interaction." International Journal of Computers Communications & Control 5, no. 5 (December 1, 2010): 625. http://dx.doi.org/10.15837/ijccc.2010.5.2218.

Abstract:
Watching the evolution of the Semantic Web (SW) from its inception to the present day, we can easily observe that the main task developers face while building it is to encode human knowledge into ontologies and human reasoning into dedicated reasoning engines. Now, the SW needs efficient mechanisms through which both humans and artificial agents can access information. The most important tools in this context are ontologies. Recent years have been dedicated to solving the infrastructure problems related to ontologies: ontology management, ontology matching, ontology adoption; but as time goes by and these problems are better understood, research interests in this area will surely shift towards the way in which agents will use them to communicate among themselves and with humans. Despite the fact that interface agents could be bilingual, it would be more efficient, safe and swift for them to use the same language to communicate with humans and with their peers. Since anthropocentric systems nowadays entail multimodal interfaces, it seems suitable to build multimodal ontologies. Generic ontologies are needed when dealing with uncertainty. Multimodal ontologies should be designed taking into account our way of thinking (mind maps, visual thinking, feedback, logic, emotions, etc.) and also the processes in which they would be involved (multimodal fusion and integration, error reduction, natural language processing, multimodal fission, etc.). By doing this it would be easier for us (and also fun) to use ontologies, but at the same time the communication with agents (and also agent-to-agent talk) would be enhanced. This is just one of our conclusions related to why building generic multimodal ontologies is very important for future semantic web applications.
22

Gangi, Alexandra, and Shelly C. Lu. "Chemotherapy-associated liver injury in colorectal cancer." Therapeutic Advances in Gastroenterology 13 (January 2020): 175628482092419. http://dx.doi.org/10.1177/1756284820924194.

Abstract:
Patients with colorectal cancer (CRC) have benefited significantly from advances in multimodal treatment with significant improvements in long-term survival. More patients are currently being treated with surgical resection or ablation following neoadjuvant or adjuvant chemotherapy. However, several cytotoxic agents that are administered routinely have been linked to liver toxicities that impair liver function and regeneration. Recognition of chemotherapy-related liver toxicity emphasizes the importance of multidisciplinary planning to optimize care. This review aims to summarize current data on multimodal treatment concepts for CRC, provide an overview of liver damage caused by commonly administered chemotherapeutic agents, and evaluate currently suggested protective agents.
23

Caro, Carlos, Jose M. Paez-Muñoz, Ana M. Beltrán, Manuel Pernia Leal, and María Luisa García-Martín. "PEGylated Terbium-Based Nanorods as Multimodal Bioimaging Contrast Agents." ACS Applied Nano Materials 4, no. 4 (March 19, 2021): 4199–207. http://dx.doi.org/10.1021/acsanm.1c00569.

24

Nakatsu, Ryohei. "Agent Software Technologies. Multimodal Interface of Human-like Agents." Journal of the Institute of Image Information and Television Engineers 52, no. 4 (1998): 431–35. http://dx.doi.org/10.3169/itej.52.431.

25

Shashkov, Evgeny V., Maaike Everts, Ekaterina I. Galanzha, and Vladimir P. Zharov. "Quantum Dots as Multimodal Photoacoustic and Photothermal Contrast Agents." Nano Letters 8, no. 11 (November 12, 2008): 3953–58. http://dx.doi.org/10.1021/nl802442x.

26

Altube, M. J., M. J. Morilla, and E. L. Romero. "Nanostructures as Robust Multimodal Anti-Bothrops Snake Envenoming Agents." IFAC-PapersOnLine 51, no. 27 (2018): 7–9. http://dx.doi.org/10.1016/j.ifacol.2018.11.599.

27

Chitgupi, Upendra, and Jonathan F. Lovell. "Naphthalocyanines as contrast agents for photoacoustic and multimodal imaging." Biomedical Engineering Letters 8, no. 2 (March 7, 2018): 215–21. http://dx.doi.org/10.1007/s13534-018-0059-2.

28

Rieffel, James, Upendra Chitgupi, and Jonathan F. Lovell. "Recent Advances in Higher-Order, Multimodal, Biomedical Imaging Agents." Small 11, no. 35 (July 16, 2015): 4445–61. http://dx.doi.org/10.1002/smll.201500735.

29

George, Stephy, and Meagan Johns. "Review of nonopioid multimodal analgesia for surgical and trauma patients." American Journal of Health-System Pharmacy 77, no. 24 (October 10, 2020): 2052–63. http://dx.doi.org/10.1093/ajhp/zxaa301.

Abstract:
Purpose: Pain is a frequent finding in surgical and trauma patients, and effective pain control remains a common challenge in the hospital setting. Opioids have traditionally been the foundation of pain management; however, these agents are associated with various adverse effects and risks of dependence and diversion. Summary: In response to the rising national opioid epidemic and the various risks associated with opioid use, multimodal pain management through use of nonopioid analgesics such as acetaminophen, nonsteroidal anti-inflammatory drugs, α2 agonists, N-methyl-d-aspartate (NMDA) receptor antagonists, skeletal muscle relaxants, sodium channel blockers, and local anesthetics has gained popularity recently. Multimodal analgesia has synergistic therapeutic effects and can decrease adverse effects by enabling use of lower doses of each agent in the multimodal regimen. This review discusses properties of the various nonopioid analgesics and encourages pharmacists to play an active role in the selection, initiation, and dose-titration of multimodal analgesia. The choice of nonopioid agents should be based on patient comorbidities, hemodynamic stability, and the agents' respective adverse effect profiles. A multidisciplinary plan for management of pain should be formulated during transitions of care and is an area of opportunity for pharmacists to improve patient care. Conclusion: Multimodal analgesia effectively treats pain while decreasing adverse effects. There is mounting evidence to support use of this strategy to decrease opioid use. As medication experts, pharmacists can play a key role in the selection, initiation, and dose-titration of analgesic agents based on patient-specific factors.
30

Guo, Chong, Shouyi Fan, Chaoyi Chen, Wenbo Zhao, Jiawei Wang, Yao Zhang, and Yanhong Chen. "Query-Informed Multi-Agent Motion Prediction." Sensors 24, no. 1 (December 19, 2023): 9. http://dx.doi.org/10.3390/s24010009.

Abstract:
In a dynamic environment, autonomous driving vehicles require accurate decision-making and trajectory planning. To achieve this, autonomous vehicles need to understand their surrounding environment and predict the behavior and future trajectories of other traffic participants. In recent years, vectorization methods have dominated the field of motion prediction due to their ability to capture complex interactions in traffic scenes. However, existing research using vectorization methods for scene encoding often overlooks important physical information about vehicles, such as speed and heading angle, relying solely on displacement to represent the physical attributes of agents. This approach is insufficient for accurate trajectory prediction models. Additionally, agents’ future trajectories can be diverse, such as proceeding straight or making left or right turns at intersections. Therefore, the output of trajectory prediction models should be multimodal to account for these variations. Existing research has used multiple regression heads to output future trajectories and confidence, but the results have been suboptimal. To address these issues, we propose QINET, a method for accurate multimodal trajectory prediction for all agents in a scene. In the scene encoding part, we enhance the feature attributes of agent vehicles to better represent the physical information of agents in the scene. Our scene representation also possesses rotational and spatial invariance. In the decoder part, we use cross-attention and induce the generation of multimodal future trajectories by employing a self-learned query matrix. Experimental results demonstrate that QINET achieves state-of-the-art performance on the Argoverse motion prediction benchmark and is capable of fast multimodal trajectory prediction for multiple agents.
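
As a concrete illustration of the decoding idea summarized in this abstract (a self-learned query matrix attending to encoded scene tokens to produce multimodal futures with confidences), here is a minimal PyTorch sketch. It is not the QINET code: the module layout, dimensions, and head design are assumptions made for exposition.

    import torch
    import torch.nn as nn

    class MultimodalTrajectoryDecoder(nn.Module):
        """Learned-query cross-attention decoder producing several candidate futures per scene."""
        def __init__(self, d_model=128, num_modes=6, horizon=30):
            super().__init__()
            # One learned query per future-trajectory mode (e.g. straight, left turn, right turn).
            self.mode_queries = nn.Parameter(torch.randn(num_modes, d_model))
            self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
            self.traj_head = nn.Linear(d_model, horizon * 2)   # (x, y) offset per future step
            self.conf_head = nn.Linear(d_model, 1)             # confidence score per mode

        def forward(self, scene_tokens):
            # scene_tokens: (batch, num_tokens, d_model) encoded agents and map context,
            # assumed here to already carry physical features such as speed and heading.
            batch = scene_tokens.size(0)
            queries = self.mode_queries.unsqueeze(0).expand(batch, -1, -1)
            modes, _ = self.cross_attn(queries, scene_tokens, scene_tokens)
            trajs = self.traj_head(modes).view(batch, modes.size(1), -1, 2)
            confs = self.conf_head(modes).squeeze(-1).softmax(dim=-1)
            return trajs, confs

    # Example: decode 6 candidate futures for a batch of 4 scenes with 50 context tokens each.
    decoder = MultimodalTrajectoryDecoder()
    trajectories, confidences = decoder(torch.randn(4, 50, 128))
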
31

Lee, Jin A., and Michael Wright. "126 Multimodal Analgesia and Discharge Opioid Requirements in Burn Patients." Journal of Burn Care & Research 41, Supplement_1 (March 2020): S85. http://dx.doi.org/10.1093/jbcr/iraa024.129.

Abstract:
Introduction: Analgesia in burn patients is significantly challenging given the complexity of burn pain. Opioids are a mainstay of therapy, but studies demonstrate varying outcomes with respect to the efficacy of adjunctive non-opioid agents in the treatment of burn pain. The need for analgesia extends beyond hospital admission; given the known risks of opioids, the impact of multimodal analgesia on opioid requirements post-discharge needs to be further elucidated in this population. Methods: In this retrospective, single-center cohort study, adult burn patients who were consecutively admitted to the burn ICU service and subsequently followed in the burn clinic between 2/2015 and 9/2018 were evaluated up to 6 months post-discharge. The subjects were divided into two cohorts based on discharge pain regimens: multimodal vs non-multimodal. Individuals taking long-acting opioids prior to admission were excluded. The primary outcome was the change in oral morphine equivalents (OME) between discharge and follow-up occurring 2–6 weeks post-discharge. Secondary outcomes included the number of multimodal agents utilized and a comparison of OME between the last 24 hours of admission and discharge. Results: A total of 152 patients were included for analysis (n = 76 per cohort). The multimodal cohort demonstrated increased total body surface area burned (23.9% ± 15.4 vs 16.6% ± 7.1; p < 0.001) and a prolonged number of days spent in the ICU (22.7 ± 23.1 vs 10.7 ± 8.9; p < 0.001). The change in OME from discharge to first follow-up was -106.6 mg in the multimodal vs -75.4 mg in the non-multimodal cohort (p = 0.039; figure 1). In each cohort, discharge OME did not statistically differ from last-24-hour OME (multimodal: p = 0.067; non-multimodal: p = 0.537). The most common non-opioid agents utilized were acetaminophen and gabapentin. Conclusions: Despite extended ICU length of stay and larger TBSA, burn patients discharged with multimodal pain regimens demonstrated a statistically significant reduction in oral morphine equivalents from discharge to first follow-up compared to those discharged on opioid-only regimens. Applicability of Research to Practice: This study demonstrates promising results with respect to lowering discharge opioid requirements by utilizing a multimodal analgesic approach in the management of burn pain.
32

Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification." Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.

Abstract:
Multimodal remote sensing data classification can enhance a model’s ability to distinguish land features through multimodal data fusion. In this context, how to help models understand the relationship between multimodal data and target tasks has become the focus of researchers. Inspired by the human feedback learning mechanism, causal reasoning mechanism, and knowledge induction mechanism, this paper integrates causal learning, reinforcement learning, and meta-learning into a unified remote sensing data classification framework and proposes causal meta-reinforcement learning (CMRL). First, based on the feedback learning mechanism, we overcame the limitations of traditional implicit optimization of fusion features and customized a reinforcement learning environment for multimodal remote sensing data classification tasks. Through feedback interactive learning between agents and the environment, we helped the agents understand the complex relationships between multimodal data and labels, thereby achieving full mining of multimodal complementary information. Second, based on the causal inference mechanism, we designed causal distribution prediction actions, classification rewards, and causal intervention rewards, capturing pure causal factors in multimodal data and preventing false statistical associations between non-causal factors and class labels. Finally, based on the knowledge induction mechanism, we designed a bi-layer optimization mechanism based on meta-learning. By constructing meta-training and meta-validation tasks that simulate the generalization scenario of unseen data, we helped the model induce cross-task shared knowledge, thereby improving its generalization ability for unseen multimodal data. The experimental results on multiple sets of multimodal datasets showed that the proposed method achieved state-of-the-art performance in multimodal remote sensing data classification tasks.
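
To make the reinforcement-learning framing in this abstract more tangible, the sketch below defines a toy environment in which the agent's action is a predicted class distribution and the reward depends on classification correctness. It is an illustration under assumed interfaces, not the CMRL implementation; the causal-intervention reward is reduced here to a simple correctness signal, and each episode is a single step.

    import numpy as np

    class MultimodalClassificationEnv:
        """Toy RL environment: observe paired modalities, act by predicting a class distribution."""
        def __init__(self, hsi_patches, lidar_patches, labels):
            # Paired modalities (e.g. hyperspectral and LiDAR patches) with class labels.
            self.hsi, self.lidar, self.labels = hsi_patches, lidar_patches, labels
            self.idx = 0

        def reset(self):
            self.idx = np.random.randint(len(self.labels))
            return self.hsi[self.idx], self.lidar[self.idx]

        def step(self, class_probs):
            # Reward +1 for a correct argmax prediction, -1 otherwise; done after one step.
            predicted = int(np.argmax(class_probs))
            reward = 1.0 if predicted == int(self.labels[self.idx]) else -1.0
            next_obs = self.reset()   # hand back a fresh sample for the next episode
            return next_obs, reward, True, {}

    # Example with random stand-in data: 10 samples, 5 classes, uniform prediction.
    env = MultimodalClassificationEnv(np.random.rand(10, 8, 8, 32),
                                      np.random.rand(10, 8, 8, 1),
                                      np.random.randint(0, 5, size=10))
    obs = env.reset()
    next_obs, reward, done, info = env.step(np.ones(5) / 5)
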
33

Biju, Vasudevanpillai, Morihiko Hamada, Kenji Ono, Sakiko Sugino, Takashi Ohnishi, Edakkattuparambil Sidharth Shibu, Shohei Yamamura, et al. "Nanoparticles speckled by ready-to-conjugate lanthanide complexes for multimodal imaging." Nanoscale 7, no. 36 (2015): 14829–37. http://dx.doi.org/10.1039/c5nr00959f.

34

Richer, Robert, Nan Zhao, Bjoern M. Eskofier, and Joseph A. Paradiso. "Exploring Smart Agents for the Interaction with Multimodal Mediated Environments." Multimodal Technologies and Interaction 4, no. 2 (June 6, 2020): 27. http://dx.doi.org/10.3390/mti4020027.

Abstract:
After conversational agents have been made available to the broader public, we speculate that applying them as a mediator for adaptive environments reduces control complexity and increases user experience by providing a more natural interaction. We implemented and tested four agents, each of them differing in their system intelligence and input modality, as personal assistants for Mediated Atmospheres, an adaptive smart office prototype. They were evaluated in a user study (N = 33) to collect subjective and objective measures. Results showed that a smartphone application was the most favorable system, followed by conversational text and voice agents that were perceived as being more engaging and intelligent than a non-conversational voice agent. Significant differences were observed between native and non-native speakers in both subjective and objective measures. Our findings reveal the potential of conversational agents for the interaction with adaptive environments to reduce work and information overload.
35

Sivakumar, Balasubramanian, Ravindran Girija Aswathy, Rebeca Romero-Aburto, Trevor Mitcham, Keith A. Mitchel, Yutaka Nagaoka, Richard R. Bouchard, Pulickel M. Ajayan, Toru Maekawa, and Dasappan Nair Sakthikumar. "Highly versatile SPION encapsulated PLGA nanoparticles as photothermal ablators of cancer cells and as multimodal imaging agents." Biomaterials Science 5, no. 3 (2017): 432–43. http://dx.doi.org/10.1039/c6bm00621c.

36

Wijaya, Andy, Ali Maruf, Wei Wu, and Guixue Wang. "Recent advances in micro- and nano-bubbles for atherosclerosis applications." Biomaterials Science 8, no. 18 (2020): 4920–39. http://dx.doi.org/10.1039/d0bm00762e.

37

Doyle-Jones, Carol. "Teachers’ Perspectives on Building Spaces for Students To Be Change Agents." Journal of the Canadian Association for Curriculum Studies 18, no. 1 (June 27, 2020): 24–25. http://dx.doi.org/10.25071/1916-4467.40568.

Abstract:
This research project is centered around five participating elementary teachers and how they created space for multimodal, literacy-based curricula in their classrooms using culturally responsive teaching (Nieto, 2017, 2018) for social justice. This presentation is a reflection on this focus. For example, Cassidy, a grade three teacher in a diverse urban school, shared that her pedagogy is “always social justice because I want my kids to be change agents if they can be. If I can just teach them how to be more understanding of each other and of humanity; and just be able to see people as humans.” The teachers shared how attending professional development workshops on oppression and discrimination helped them build resources that focused on inclusion and equity. Comprehensive interviews and analyses of pedagogical tools and designs provided knowledge about the multimodal resources, tools and activities that helped the teachers build effective social justice-focused curriculum. The interviews and analyses also provided insight into their struggles and successes as well as their ongoing pedagogical goals. A New Literacies perspective (Coiro et al., 2008) helped reveal how these teachers collaborate, plan and create equitable learning spaces and opportunities in their classrooms. The study also highlights how these teachers built opportunities for multimodal teaching and learning and how they developed their culturally responsive teaching practices.
38

Rach, Niklas, Klaus Weber, Yuchi Yang, Stefan Ultes, Elisabeth André, and Wolfgang Minker. "EVA 2.0: Emotional and rational multimodal argumentation between virtual agents." it - Information Technology 63, no. 1 (February 1, 2021): 17–30. http://dx.doi.org/10.1515/itit-2020-0050.

Abstract:
Abstract Persuasive argumentation depends on multiple aspects, which include not only the content of the individual arguments, but also the way they are presented. The presentation of arguments is crucial – in particular in the context of dialogical argumentation. However, the effects of different discussion styles on the listener are hard to isolate in human dialogues. In order to demonstrate and investigate various styles of argumentation, we propose a multi-agent system in which different aspects of persuasion can be modelled and investigated separately. Our system utilizes argument structures extracted from text-based reviews for which a minimal bias of the user can be assumed. The persuasive dialogue is modelled as a dialogue game for argumentation that was motivated by the objective to enable both natural and flexible interactions between the agents. In order to support a comparison of factual against affective persuasion approaches, we implemented two fundamentally different strategies for both agents: The logical policy utilizes deep Reinforcement Learning in a multi-agent setup to optimize the strategy with respect to the game formalism and the available argument. In contrast, the emotional policy selects the next move in compliance with an agent emotion that is adapted to user feedback to persuade on an emotional level. The resulting interaction is presented to the user via virtual avatars and can be rated through an intuitive interface.
39

Djenidi, Hicham, Amar Ramdane-Cherif, Chakib Tadj, and Nicole Levy. "Generic Pipelined Multi-Agents Architecture for Multimedia Multimodal Software Environment." Journal of Object Technology 3, no. 8 (2004): 147. http://dx.doi.org/10.5381/jot.2004.3.8.a3.

40

Kim, Tae Jeong, Kwon Seok Chae, Yongmin Chang, and Gang Ho Lee. "Gadolinium Oxide Nanoparticles as Potential Multimodal Imaging and Therapeutic Agents." Current Topics in Medicinal Chemistry 13, no. 4 (March 1, 2013): 422–33. http://dx.doi.org/10.2174/1568026611313040003.

41

Lin, Yan, Zhi-Yi Chen, and Feng Yang. "Ultrasound-Based Multimodal Molecular Imaging and Functional Ultrasound Contrast Agents." Current Pharmaceutical Design 19, no. 18 (April 1, 2013): 3342–51. http://dx.doi.org/10.2174/1381612811319180016.

42

Lin, Yan, Zhi-Yi Chen, and Feng Yang. "Ultrasound-Based Multimodal Molecular Imaging and Functional Ultrasound Contrast Agents." Current Pharmaceutical Design 999, no. 999 (March 1, 2013): 6–10. http://dx.doi.org/10.2174/13816128113198880008.

43

Griol, David, Jose Manuel Molina, and Araceli Sanchís de Miguel. "Developing multimodal conversational agents for an enhanced e-learning experience." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 3, no. 8 (October 14, 2014): 13. http://dx.doi.org/10.14201/adcaij2014381326.

44

Rieter, William J., Kathryn M. L. Taylor, Hongyu An, Weili Lin, and Wenbin Lin. "Nanoscale Metal–Organic Frameworks as Potential Multimodal Contrast Enhancing Agents." Journal of the American Chemical Society 128, no. 28 (July 2006): 9024–25. http://dx.doi.org/10.1021/ja0627444.

45

Lazzaro, M. A., and O. O. Zaidat. "Multimodal endovascular reperfusion therapies: Adjunctive antithrombotic agents in acute stroke." Neurology 78, no. 7 (February 13, 2012): 501–6. http://dx.doi.org/10.1212/wnl.0b013e318246d6a5.

46

Martin, A. C., I. Gouin-Thibault, V. Siguret, A. Mordohay, C. M. Samama, P. Gaussem, B. Le Bonniec, and A. Godier. "Multimodal assessment of non-specific hemostatic agents for apixaban reversal." Journal of Thrombosis and Haemostasis 13, no. 3 (February 5, 2015): 426–36. http://dx.doi.org/10.1111/jth.12830.

47

Deravi, F., M. C. Fairhurst, R. M. Guest, N. J. Mavity, and A. M. D. Canuto. "Intelligent agents for the management of complexity in multimodal biometrics." Universal Access in the Information Society 2, no. 4 (November 1, 2003): 293–304. http://dx.doi.org/10.1007/s10209-002-0039-1.

48

Teraphongphom, Nutte, Peter Chhour, John R. Eisenbrey, Pratap C. Naha, Walter R. T. Witschey, Borirak Opasanont, Lauren Jablonowski, David P. Cormode, and Margaret A. Wheatley. "Nanoparticle Loaded Polymeric Microbubbles as Contrast Agents for Multimodal Imaging." Langmuir 31, no. 43 (October 16, 2015): 11858–67. http://dx.doi.org/10.1021/acs.langmuir.5b03473.

49

Tuominen, Eva, Marjatta Kangassalo, Pentti Hietala, Roope Raisamo, and Kari Peltola. "Proactive Agents to Assist Multimodal Explorative Learning of Astronomical Phenomena." Advances in Human-Computer Interaction 2008 (2008): 1–13. http://dx.doi.org/10.1155/2008/387076.

Abstract:
This paper focuses on developing, testing, and examining the Proagents multimodal learning environment to support blind children's explorative learning in the area of astronomy. We utilize haptic, auditory, and visual interaction. Haptic and auditory feedback makes the system accessible to blind children. The system is used as an exploration tool for children's spontaneous and question-driven explorations. High-level interaction and play are essential in environments for young children. Proactive agents support and guide children to deepen their explorations and discover the central concepts and relations in phenomena. It has been challenging to integrate, in a pedagogically relevant way, the explorative learning approach, the proactive agents' actions, the possibilities of haptic perception, and the selected astronomical phenomena. Our tests have shown that children are very interested in using the system and the operations of the agents.
50

Poppe, Ronald, Ronald Böck, Francesca Bonin, Nick Campbell, Iwan de Kok, and David Traum. "From multimodal analysis to real-time interactions with virtual agents." Journal on Multimodal User Interfaces 8, no. 1 (March 2014): 1–3. http://dx.doi.org/10.1007/s12193-014-0152-5.

