Theses / dissertations on the topic "Agents multimodaux"

Below are the 50 most relevant theses and dissertations for the study of the topic "Agents multimodaux", drawn from a wide range of scientific fields. Each entry gives a full bibliographic reference (which can be formatted in APA, MLA, Chicago, Harvard, Vancouver and other citation styles), followed by the work's abstract where one is present in the metadata; many entries also link to the full text in .pdf format.
Chaker, Walid. "Modélisation multi-échelle d'environnements urbains peuplés : application aux simulations multi-agents des déplacements multimodaux". Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26481/26481.pdf.
Abrilian, Sarkis. "Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse". PhD thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.
Gallouedec, Quentin. "Toward the generalization of reinforcement learning". Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0013.
Conventional Reinforcement Learning (RL) involves training a unimodal agent on a single, well-defined task, guided by a gradient-optimized reward signal. This framework does not allow us to envisage a learning agent suited to real-world problems, which involve diverse modality streams and multiple tasks that are often poorly defined, or sometimes not defined at all. Hence, we advocate transitioning towards a more general framework, aiming to create RL algorithms that are more inherently versatile. To advance in this direction, we identify two primary areas of focus. The first involves improving exploration, enabling the agent to learn from the environment with reduced dependence on the reward signal. We present Latent Go-Explore (LGE), an extension of the Go-Explore algorithm. While Go-Explore achieved impressive results, it was constrained by domain-specific knowledge; LGE overcomes these limitations, offering wider applicability within a general framework. In the various environments tested, LGE consistently outperforms the baselines, showcasing its enhanced effectiveness and versatility. The second focus is to design a general-purpose agent that can operate in a variety of environments, which calls for a multimodal structure and even transcends the conventional sequential framework of RL. We introduce Jack of All Trades (JAT), a multimodal Transformer-based architecture uniquely tailored to sequential decision tasks. Using a single set of weights, JAT demonstrates robustness and versatility, rivaling its single baseline on several RL benchmarks and even showing promising performance on vision and textual tasks. We believe these two contributions are a valuable step towards a more general approach to RL. In addition, we present other methodological and technical advances that are closely related to our core research question.
The first is the introduction of a set of sparsely rewarded simulated robotic environments, designed to provide the community with the tools needed for learning under conditions of low supervision. Notably, three years after its introduction, this contribution has been widely adopted by the community and continues to receive active maintenance and support. The second is Open RL Benchmark, our pioneering initiative to provide a comprehensive and fully tracked set of RL experiments, going beyond typical data to include all algorithm-specific and system metrics. This benchmark aims to improve research efficiency by providing out-of-the-box RL data and facilitating accurate reproducibility of experiments. With its community-driven approach, it has quickly become an important resource, documenting over 25,000 runs. These technical and methodological advances, along with the scientific contributions described above, are intended to promote a more general approach to Reinforcement Learning and, we hope, represent a meaningful step toward the eventual development of a more broadly capable RL agent.
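The archive-and-return idea behind Go-Explore, which the LGE work above extends, can be sketched in a few lines. This is an illustrative toy on a deterministic 1-D chain world, not the thesis's algorithm: the environment, the cell mapping (here, cell = state) and the least-visited selection rule are all simplifying assumptions.

```python
import random

def go_explore(n_states=20, iterations=200, explore_steps=5, seed=0):
    """Toy archive-and-return exploration on a 1-D chain.

    States are 0..n_states-1; actions are -1/+1 steps. Each archive
    entry maps a discovered state ("cell") to the action sequence that
    first reached it, so the agent can return there deterministically
    before exploring further.
    """
    rng = random.Random(seed)
    archive = {0: []}        # cell -> trajectory of actions reaching it
    visits = {0: 0}          # selection counts, to favour rarely-tried cells
    for _ in range(iterations):
        # Select a promising cell: here, simply the least-visited one.
        cell = min(archive, key=lambda c: visits[c])
        visits[cell] += 1
        # Return: replay the stored trajectory (environment is deterministic).
        state, trajectory = 0, list(archive[cell])
        for action in trajectory:
            state = max(0, min(n_states - 1, state + action))
        # Explore: take a few random steps from the restored state.
        for _ in range(explore_steps):
            action = rng.choice((-1, 1))
            state = max(0, min(n_states - 1, state + action))
            trajectory.append(action)
            if state not in archive:   # new cell: remember how we got here
                archive[state] = list(trajectory)
                visits[state] = 0
    return archive
```

Because every newly discovered cell is stored with an exact replayable prefix, exploration keeps pushing outward from the frontier instead of repeatedly re-exploring near the start, which is the property reward-sparse settings need.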
Kamoun, Mohamed. "Conception d’un système d’information pour l’aide au déplacement multimodal : une approche multi-agents pour la recherche et la composition des itinéraires en ligne". Ecole Centrale de Lille, 2007. http://tel.archives-ouvertes.fr/docs/00/14/28/46/PDF/these_kamoun.pdf.
To plan a journey, a traveller must consult the websites of several public transport operators. To spare users this time-consuming task, this work designs a Cooperative Mobility Information System (SICM) providing multimodal, multi-operator travel information. This integration system automates the itinerary search and the composition of multi-operator routes. Its design is based on multi-agent system (MAS) theory. The SICM makes the operators' existing information systems cooperate efficiently, so that it can provide users with an optimized route by compiling the needed information from the different operators' information sources. In this approach, the SICM is a middleware that becomes one client among the other users of the existing information systems. It can be considered a mediator between the various distributed information sources on one hand and the travellers on the other. The system must be able both to find the information sources that can answer an itinerary request and to gather this information coherently to compose an optimized itinerary. To provide a route optimized according to the user's criteria, distributed and time-dependent shortest-path algorithms were adopted and adapted to perform on-line itinerary composition.
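The time-dependent shortest-path search mentioned at the end of this abstract can be illustrated with a small sketch. This is a generic earliest-arrival Dijkstra under a FIFO assumption (leaving later never arrives earlier), not the SICM's actual distributed algorithm; the graph and timetable functions below are hypothetical.

```python
import heapq

def earliest_arrival(graph, source, target, depart_time):
    """Time-dependent Dijkstra: edge costs depend on the arrival time
    at their tail node, which models timetabled (multimodal) legs.

    `graph` maps node -> list of (neighbour, travel_fn), where
    travel_fn(t) returns the arrival time at `neighbour` if we are at
    the tail node at time t (waiting for the next departure included).
    """
    best = {source: depart_time}
    queue = [(depart_time, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == target:
            return t
        if t > best.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbour, travel_fn in graph.get(node, ()):
            arrival = travel_fn(t)
            if arrival < best.get(neighbour, float("inf")):
                best[neighbour] = arrival
                heapq.heappush(queue, (arrival, neighbour))
    return None                           # target unreachable
```

A travel function such as `lambda t: ((t + 14) // 15) * 15 + 5` models a line departing every 15 minutes with a 5-minute ride; a constant `lambda t: t + 20` models walking. The same query then automatically picks walking or waiting for the bus depending on the departure time.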
Bangalore, Kantharaju Reshmashree. "Modelling Cohesive Behaviours for Virtual Agents in Multiparty Interactions". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS230.
Group interactions are a common form of communication among humans. The members of a group are often involved in discussing, making decisions and exchanging ideas, in different settings (e.g., meetings, conferences, parties). Group cohesion describes the shared bond that drives members to stay together and to want to work together to achieve group goals. In group interactions, humans communicate and coordinate with each other via a number of verbal and non-verbal behaviours. In this research work, as a first step we identify the relation between group cohesion and certain non-verbal social signals of interest. Next, we present results on the automatic estimation of cohesion levels in groups, using different features and feature-representation techniques. Virtual agents, computer-generated animated characters with human-like non-verbal behaviours, have been widely used for human-computer interaction in various applications, e.g., educational agents, health coaches and training assistants. Most applications so far have focused on developing agents for dyadic interactions, i.e., a single agent and user. A group of agents (multiparty) can be potentially effective in persuading, motivating and educating users through interactive discussions. In the next step, we develop a multiparty model involving multiple autonomous agents capable of displaying cohesive group behaviour, i.e., shared commitment to group tasks and positive relationships among the agents. Considering the surge in the range of applications using virtual agents, it is important to study the interactions between multiple agents and the user and to understand the effects of using such a system. We hypothesise that a multi-agent system would allow the user to be more engaged in the discussion, provide different perspectives on the same issue, and help users make informed decisions.
Therefore, in the final step we conduct multiple user evaluation studies to understand the effects of multiparty interactions on users and their perceptions (e.g., level of trust, persuasion). We present insights into the most effective forms of interaction for promoting behaviour change or persuading the user across different group conversational topics. To summarise, in this thesis we identify the association between certain non-verbal social signals and group cohesion, present the estimation accuracy obtained using features extracted from these signals, develop a multiparty model to simulate a cohesive group of agents displaying prominent social signals, and finally evaluate the effectiveness of such a model in the context of behaviour change and its effects on users' perceptions.
Fragoso, Ygara Lúcia Souza Melo. "Guibuilder multimodal : um framework para a geração de interfaces multimodais com o apoio de interaction design patterns". Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/7641.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Human-computer interaction has improved substantially over time through the evolution of interfaces. Allowing users to interact with machines through several modalities of communication, in a natural way, can increase user interest and help ensure the success of an application. However, the multimodality literature shows that developing such interfaces is not a simple task, especially for inexperienced or recently graduated professionals, since each interaction modality has its own technical complexity: acquiring and adapting to new tools, languages, possible actions, and so on. Moreover, it is necessary to determine which modalities (voice, touch and gestures) can be used in the application, how to combine them so that the strong point of one complements the weak point of another and vice versa, and in what context the final user will be involved. GuiBuilder Multimodal was developed to meet the basic needs of implementing an interface that uses voice, touch and gesture. The framework supports interface development through the WYSIWYG (What You See Is What You Get) model, where the designer only sets a few parameters to make a component multimodal. During the interface-creation phase, agents supervise what the designer does and provide support and hints, drawing on design patterns divided into categories such as multimodality, interaction with the user, and components.
Bendal, Ove-Andre. "Integration of multimodal input by using agents". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9251.
Today, user interfaces normally consist of a screen, with a pointing device and a keyboard for input. However, as more advanced technologies and methods appear, there are good opportunities to use them for more natural and effective human-computer interfaces. The main motivation is an interface that is more natural and easier to use, where the computer understands the user without too much effort on the user's part. Intelligent interfaces could be a way to achieve this goal. The main focus of this thesis is multimodal input, which combines different input modalities to achieve the user's goal. A framework has been designed in which the user can change between input modalities. The system integrates the information given in different input modalities into one joint meaning. In this architecture, input can be either location or command input, and different modalities can be used for each input type. The example described later in this thesis combines either speech or written text as command input with either map input or physical position as location input. An agent-based blackboard architecture is used for collecting input. Agents collect information directly from the user: each agent represents its own input modality and is responsible for analysing its input. Once this is done, the agent sends the information to a common blackboard, which holds the latest information from each agent. A dedicated agent responsible for fusing this information collects it from the blackboard and integrates it into one joint interpretation, which decides what should be done to which object. Since the modalities are independent of each other, other modalities can easily be added with only small changes to the rest of the system, as long as they provide command or location input conforming to the current representation structure.
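The blackboard integration described in this abstract can be sketched as follows. This is a minimal single-process illustration of the idea, assuming two slots (command and location) and latest-posting-wins fusion; it is not the thesis's implementation, and the modality names are hypothetical.

```python
import itertools

class Blackboard:
    """Shared store where each modality agent posts its latest interpretation."""

    def __init__(self):
        self._seq = itertools.count()
        self.entries = {}   # modality -> (seq, slot, value)

    def post(self, modality, slot, value):
        # Each agent analyses its own raw input (speech, typed text, map
        # click, GPS fix, ...) and posts a typed result, replacing its
        # earlier posting for that modality.
        self.entries[modality] = (next(self._seq), slot, value)

def fuse(blackboard):
    """Fusion agent: merge per-modality postings into one joint meaning.

    For each slot (command, location) the most recent posting wins, so
    the user can switch modalities freely; fusion succeeds only once
    both slots are filled.
    """
    joint = {}
    for _seq, slot, value in sorted(blackboard.entries.values()):
        joint[slot] = value
    if {"command", "location"} <= joint.keys():
        return joint
    return None   # incomplete interpretation: wait for more input
```

Adding a new modality only means adding another agent that calls `post` with a command or location value, which mirrors the extensibility claim in the abstract.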
Mancini, Maurizio. "Multimodal distinctive behavior for expressive embodied conversational agents". Paris 8, 2008. https://octaviana.fr/items/show/9956#?c=0&m=0&s=0&cv=0.
Embodied Conversational Agents are a new type of computer interface with human-like bodies and conversational skills. Users interacting with agents will be more engaged and participative if the agents exhibit behaviors that look coherent across different situations and emotional states. In the present work, we aim at increasing agents' believability by looking at two main aspects of the problem: an agent must be able to show its emotional state and communicative intentions not only through specific facial expressions, gestures, etc., but also by varying the quality of its movements (e.g., their speed and amplitude) and the choice of the modalities used to communicate; and the agent must maintain a distinctive behavior tendency that remains apparent during any communication. We have developed a model addressing these two issues, and evaluated the realism and believability of the resulting agent's behaviors through perceptual tests and an application scenario. The resulting system is highly extensible and configurable, and can also be used as a research tool to study human communication.
Gaciarz, Matthis. "Régulation de trafic urbain multimodal : une modélisation multi-agents". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1281/document.
For several decades, urban congestion has become more and more widespread, deteriorating the quality of life of city dwellers. Several methods are used to reduce urban congestion, notably traffic regulation and the promotion of public transportation. Since the 1990s, tools from artificial intelligence, particularly distributed systems and multi-agent systems, have enabled new methods for traffic regulation: their distributed nature makes it easier to handle the complexity of traffic-related problems. Moreover, the improving communication abilities of vehicles and the advent of autonomous vehicles allow new regulation approaches to be considered. The research presented here is twofold. First, we propose a method for traffic regulation at an intersection based on automatic negotiation. Our method is built on an argumentation system describing the state of the traffic and the preferences of each vehicle, relying on reasoning methods for vehicles and infrastructures. In the second part of this thesis, we propose a method for coordinating buses with the rest of the traffic. It allows a bus to coordinate in an anticipatory way with the next intersections on its trajectory, in order to define a common regulation policy that lets the bus reach its next stop without suffering from potential congestion.
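The intersection-regulation idea can be illustrated with a toy scheduler in which vehicles submit crossing requests and a negotiated priority (for instance, a delayed bus) decides the crossing order. This is a deliberately simplified sketch, not the argumentation-based protocol of the thesis; the priority values are assumed to come from whatever negotiation precedes scheduling.

```python
def schedule_crossings(requests, crossing_time=2):
    """Toy intersection scheduler: grant conflicting vehicles exclusive
    time slots, serving the highest negotiated priority first among
    vehicles that have already arrived.

    `requests` is a list of (vehicle, arrival_time, priority) tuples;
    ties on priority go to the earlier arrival. Purely illustrative.
    """
    pending = sorted(requests, key=lambda r: r[1])   # by arrival time
    schedule, clock = {}, 0
    while pending:
        clock = max(clock, pending[0][1])            # wait for next arrival
        ready = [r for r in pending if r[1] <= clock]
        vehicle, arrival, priority = max(ready, key=lambda r: (r[2], -r[1]))
        schedule[vehicle] = clock                    # exclusive crossing slot
        clock += crossing_time
        pending.remove((vehicle, arrival, priority))
    return schedule
```

In the example below, the bus arrives after the first car but, having the higher priority, crosses before the other car, which is the anticipatory-priority effect the abstract describes.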
Kothapalli, Satya V. V. N. "Nano-Engineered Contrast Agents : Toward Multimodal Imaging and Acoustophoresis". Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172397.
Kamoun, Mohamed Amine. "Conception d'un système d'information pour l'aide au déplacement multimodal : Une approche multi-agents pour la recherche et la composition des itinéraires en ligne". Phd thesis, Ecole Centrale de Lille, 2007. http://tel.archives-ouvertes.fr/tel-00142340.
To produce the multimodal, multi-operator information needed for travel assistance, the SICM must access the information systems of the various transport operators and integrate search results generated by the operators' different algorithms. In this approach, the SICM is a middleware that becomes one client among the other users of the existing information systems. The SICM thus becomes the intermediary between the heterogeneous, distributed information sources on one hand and the clients on the other. This system must be able both to find the right information source to query for each user request, and to assemble the returned information coherently to answer those requests. To provide a composed and, above all, optimized itinerary according to the user's criteria, "on-line" distributed shortest-path algorithms adapted to dynamic (time-dependent) graphs were chosen to build this search engine for on-line multimodal itinerary composition.
Sathiyajith, Cuhananthan Wijayanayagam. "Investigations towards new multidentate ligands as potential multimodal imaging agents". Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55044/.
Rehm, Matthias. "LOKATOR - Multimodale Bedeutungskonstitution in situierten Agenten". [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=969431015.
Fragueiro, Oihane. "Developing nanoparticles as contrast agents for cell labelling and multimodal bioimaging". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028423/.
Stocky, Thomas A. (Thomas August) 1978. "Conveying routes : multimodal generation and spatial intelligence in embodied conversational agents". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87833.
Kilic, Nüzhet Inci. "Graphene Quantum Dots as Fluorescent and Passivation Agents for Multimodal Bioimaging". Thesis, KTH, Tillämpad fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298302.
Since their discovery, zero-dimensional graphene (carbon) quantum dots have attracted attention in bio-related applications, particularly for their optical properties, chemical stability and easily modifiable surface. This thesis focuses on a green synthesis route for nitrogen-doped graphene quantum dots for bimodal bioimaging with X-ray fluorescence and optical fluorescence. Both conventional and microwave-assisted solvothermal synthesis methods were used to investigate the effect of the method on the synthesized quantum dots. The microwave-assisted method enabled the synthesis of uniform quantum dots with excitation-independent properties thanks to highly controllable reaction conditions. The molecular structure of the precursors was shown to affect the optical fluorescence properties of the graphene quantum dots. By choosing specific precursors, quantum dots emitting in both blue and red were obtained, with emission maxima at 438 and 605 nm under excitation at 390 and 585 nm, respectively. Amine-functionalized Rh nanoparticles were chosen as an active core for X-ray fluorescence, synthesized by a microwave-assisted hydrothermal method with a custom-designed sugar ligand as reducing agent. These nanoparticles were conjugated with blue-emitting quantum dots by EDC-NHS treatment. The hybrid nanoparticles showed green emission (520 nm) under 490 nm excitation and reduced cytotoxicity, as measured by real-time cell analysis (RTCA), compared with bare Rh nanoparticles, highlighting the passivating role played by the quantum dots. The hybrid complex constituted a multimodal contrast agent for bioimaging, as demonstrated with confocal microscopy (in vitro) and X-ray fluorescence phantom experiments.
Capece, Sabrina. "Fluoropentane-filled droplets as potential multimodal contrast agent". Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2013. http://hdl.handle.net/2108/202019.
Fayech, Besma. "Régulation des réseaux de transport multimodal : systèmes multi-agents et algorithmes évolutionnistes". Lille 1, 2003. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2003/50376-2003-323.pdf.
Mihoub, Alaeddine. "Apprentissage statistique de modèles de comportement multimodal pour les agents conversationnels interactifs". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT079/document.
Face-to-face interaction is one of the most fundamental forms of human communication. It is a complex, coupled multimodal dynamic system involving not only speech but numerous parts of the body: gaze, the orientation of the head, chest and body, facial and brachiomanual movements, etc. Understanding and modeling this type of communication is a crucial step in designing interactive agents capable of carrying on credible conversations with human partners. Concretely, a model of multimodal behavior for interactive social agents faces the complex task of generating gestural scores given an analysis of the scene and an incremental estimation of the joint objectives pursued during the conversation. The objective of this thesis is to develop models of multimodal behavior that allow artificial agents to engage in relevant co-verbal communication with a human partner. While the immense majority of work in the field of human-agent interaction (HAI) is scripted using rule-based models, our approach relies on training statistical models from traces collected during exemplary interactions demonstrated by human tutors. In this context, we introduce "sensorimotor" models of behavior, which perform both the recognition of joint cognitive states and the generation of social signals in an incremental way. In particular, the proposed behavior models must estimate the current interaction unit (IU) in which the interlocutors are jointly engaged, and predict the co-verbal behavior of the human tutor given the behavior of the interlocutor(s). The proposed models are all graphical models, i.e. Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN). The models were trained and evaluated, in particular against classic classifiers, using datasets collected during two different interactions.
Both interactions were carefully designed so as to collect, in a minimum amount of time, a sufficient number of exemplars of mutual attention and multimodal deixis of objects and places. Our contributions are completed by original methods for the interpretation and comparative evaluation of the properties of the proposed models. By comparing the output of the models with the original scores, we show that the HMM, thanks to its sequential modeling properties, outperforms the simple classifiers. Semi-Markov models (HSMM) further improve the estimation of sensorimotor states thanks to duration modeling. Finally, thanks to a rich structure of dependency between variables learnt from the data, the DBN demonstrates both the best performance and the most faithful multimodal coordination with respect to the original multimodal events.
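The HMM decoding underlying such behavior models can be sketched with a standard Viterbi pass. The states, observations and probabilities below are illustrative stand-ins (hidden interaction units observed through discretised gaze cues), not values from the thesis.

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence, computed in log space.

    In a behavior model of this kind, hidden states would be joint
    interaction units and observations discretised multimodal cues.
    """
    # best[s] = (log-prob, path) of the best path ending in state s
    best = {s: (math.log(start_p[s]) + math.log(emit_p[s][observations[0]]), [s])
            for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                ((lp + math.log(trans_p[prev][s]) + math.log(emit_p[s][obs]),
                  path + [s]) for prev, (lp, path) in best.items()),
                key=lambda item: item[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda item: item[0])[1]
```

With two hypothetical units ("read" vs "speak") emitting gaze targets ("paper" vs "partner"), the decoder recovers the unit sequence that best explains an observed gaze stream; an HSMM would additionally model how long each unit lasts.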
Vaillant, Solenne. "Suivi in vivo de cellules immunitaires par imagerie multimodale". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS021/document.
Recent clinical trial results have demonstrated the efficacy of immunotherapy in cancer patients. This type of therapy treats cancer by stimulating the patient's immune defenses. The aim of this thesis project is to develop an efficacy biomarker for this therapy, in order to better understand the biological mechanisms involved and to provide an early, non-invasive indicator of the patient's response to immunotherapy. To do this, two imaging techniques (MRI and PET) were used as in vivo monitoring tools for the biodistribution of different populations of immune cells. The first step of this work was to establish different protocols for labeling immune cells. For the PET approach, the immune cells were labeled with zirconium-89; for MRI, two labeling techniques were studied: one using iron nanoparticles, the other micelles loaded with fluorine-19. After validation of their non-toxicity, the sensitivity of each labeling was evaluated in vitro and then in vivo, making it possible to study the biodistribution of the immune cells after different types of injection. The zirconium-89 labeling was then tested on different animal models of immunotherapy (PD1/PDL1, for example). Finally, since direct labeling does not allow optimal long-term cellular monitoring, a cell labeling approach using reporter genes was considered. It involves modifying the genome of the immune cells so that they express an enzyme (for example the viral thymidine kinase HSV1-TK) or a transporter (such as the NIS iodine transporter) allowing the internalization of a radioactive tracer in vivo, thereby enabling indirect labeling of the cells.
Pigaleva, Anastacia, and Sally Sandberg. "Katter på Instagram - Kändisskap i djurformat : En multimodal analys av djurs förmänskligande på sociala medier". Thesis, Karlstads universitet, Institutionen för sociala och psykologiska studier (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-81585.
Roberts, Geraint Rhys Dafydd. "Superparamagnetic iron oxide nanoparticles : foundations for novel bioconjugate species and multimodal contrast agents". Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/110803/.
Chu, Amy 1980. "Mutual disambiguation of recognition errors in a multimodal navigational agent". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87392.
Texto completo da fonteBeskow, Jonas. "Talking Heads - Models and Applications for Multimodal Speech Synthesis". Doctoral thesis, KTH, Tal, musik och hörsel, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3561.
Roa, Seïler Néna. "Towards an emotionally intelligent interaction strategy for multimodal embodied conversational agents acting as companions". Thesis, Edinburgh Napier University, 2015. http://researchrepository.napier.ac.uk/Output/462318.
Larsson, Malin. "Toward increased applicability of ultrasound contrast agents". Doctoral thesis, KTH, Medicinsk bildteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-163387.
Ljungkvist, Ida-Marie, and Pauline Gröning. "“Har man aldrig tränat på att rita en varg, så hur ska man veta hur man ska rita en varg? Precis som att skriva, så hur ska man veta hur man skriver?” : En studie om hur några lärare verksamma i årskurs 1–3 resonerar om visuella representationers agens i svenskundervisningen". Thesis, Högskolan Kristianstad, Fakulteten för lärarutbildning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-21989.
Texto completo da fonteTrajano, Izabella da Silva Negrão. "A imagem como agente de representação social e ideológica no discurso multimodal". reponame:Repositório Institucional da UnB, 2013. http://repositorio.unb.br/handle/10482/15181.
A pesquisa intitulada “A imagem como agente de representação social e ideológica no discurso multimodal” tem como objetivo investigar a função semiótica de imagens como agentes de construção de representações sociais e ideológicas, como formas simbólicas mediadas pelos meios de comunicação de massa, influenciando diretamente na produção de significados do discurso. O estudo defende a tese de que “a imagem desempenha um papel de agente na construção de representações sociais e ideológicas de estruturas sociais, por meio de múltiplas articulações semióticas no discurso multimodal”. Para alcançar esse objetivo, discursos multimodais de uma marca e de um produto foram selecionados para ser investigados em três segmentos diferentes de análises: análise textual e discursiva, análise social e análise multimodal. Essas diferentes análises também tiveram como objetivo responder aos três questionamentos que nortearam a pesquisa: (1) Qual a relevância do estudo da imagem como agente de representações sociais e ideológicas no discurso multimodal? (2) Como as imagens atuam na construção dessas representações marcadas nos discursos como modo de produção de significados? (3) Qual é o papel da imagem no discurso multimodal para uma nova perspectiva do letramento? A pesquisa enquadra-se no perfil da investigação qualitativa, uma vez que demanda uma visão mais holística do processo de pesquisa social, com diferentes metodologias, contribuindo para a sua sustentação e embasamento. (BAUER; GASKELL; ALLUM, 2008, p. 26). 
Fundamentada nas teorias da Análise de Discurso Crítica (FAIRCLOUGH, 1992, 1995, 2003, 2006, 2010, 2012), da Semiótica Social (KRESS; VAN LEEUWEN, [1996], 2006), da Multimodalidade (KRESS, 2010; KRESS; VAN LEEUWEN, 2001), da Ideologia (THOMPSON, [1995], 2009) e dos ‘Multiletramentos’ (ROJO, 2012), evidenciamos que a imagem desempenha, com todo o seu potencial semiótico, importante papel de agência no discurso e, por esse motivo, merece ser estudada e ser reconhecida como um modo semiótico capaz de produzir significados. Buscar esse reconhecimento, hoje, no mundo moderno, é reconhecer que esse mesmo mundo está cada vez mais multimodal e, por isso, outros modos semióticos de comunicação e de linguagem são exigidos pela sociedade globalizada, demandando o conhecimento de novos letramentos (multiletramentos) e da prática deles no ambiente escolar, a fim de possibilitar aos estudantes (leitores) condições reais para o entendimento do discurso por meio de diversos recursos semióticos. Ao analisar as campanhas publicitárias, ora discursos multimodais, foi possível evidenciar que os anunciantes de uma marca ou produto, conscientes da agência das imagens, empregam tal modo semiótico com sutileza e criatividade na tentativa de conquistar o seu público-alvo e dessa forma auxiliam na construção da representação social e ideológica dessas estruturas sociais. ______________________________________________________________________________ ABSTRACT
The research entitled "The image as an agent of social and ideological representations in multimodal discourse" aims to investigate the semiotic function of images as agents in the construction of social and ideological representations in multimodal discourses, as symbolic forms mediated by the mass media that directly influence the discourse's meaning making. The study supports the thesis that "the image plays the role of an agent in the construction of social and ideological representations of social structures, through multiple semiotic articulations in multimodal discourse". To reach this goal, multimodal discourses of a brand and of a product were selected for three different segments of analysis: textual and discursive analysis, social analysis and multimodal analysis. These analyses also aimed to answer the three questions that guided the research: 1) What is the relevance of the image as an agent of social and ideological representations in multimodal discourse? 2) How do images act in the construction of these representations marked in discourses as a mode of meaning making? 3) What is the role of the image in multimodal discourse for a new perspective on literacy? The research is qualitative, drawing on different methodologies to sustain and ground it (BAUER; GASKELL; ALLUM, 2008, p. 26). Based on theories such as Critical Discourse Analysis (FAIRCLOUGH, 1992, 1995, 2003, 2006, 2010, 2012), Social Semiotics (KRESS; VAN LEEUWEN, [1996], 2006), Multimodality (KRESS, 2010; KRESS; VAN LEEUWEN, 2001), Ideology (THOMPSON, [1995], 2009) and 'Multiliteracies' (ROJO, 2012), we show that images play, with all their semiotic potential, an important role as agents in discourse and therefore deserve to be investigated and acknowledged as semiotic modes as capable of making meaning as verbal language is.
Seeking that acknowledgement today means recognizing that the modern world has become increasingly multimodal and that, as a result, other communicational and linguistic semiotic resources are required by a globalized society, demanding knowledge of new literacies ('multiliteracies') and their practice in the school environment, in order to give students (readers) real conditions for understanding discourse through a range of semiotic resources. In analyzing such advertisements, themselves multimodal discourses, it was possible to see that advertisers of a brand or product, aware of the agency of images, use such semiotic resources with subtlety and creativity in an attempt to attract their audience and, in doing so, help construct the social and ideological representations of these social structures.
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
Texto completo da fonte
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that we can generate them in the style of any speaker. With these motivations in mind, we first propose a semantically aware, speech-driven facial and head gesture synthesis model trained on the TEDx Corpus which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures, driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled based on multimodal speaker data (language, body gestures and speech) and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Potdevin, Delphine. "Vers des agents conversationnels animés sociaux : Quelle influence de l'intimité virtuelle sur l'expérience utilisateur et la relation-client ?" Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASW004.
Texto completo da fonte
Embodied Conversational Agents (ECAs) are increasingly present in our daily lives and are gradually finding a place in our habits. These expert systems have professional skills in a wide range of domains (banking, insurance, health, education). However, ECAs still suffer from a lack of adoption by users, who tire of them very quickly or simply refuse to use them. One explanation is that professional skills alone are not enough to satisfy users, and social skills play an important role in the customer relationship. At the crossroads of social psychology, affective computing and ergonomics, this thesis aims to explore the impact of the socio-emotional skills of ECAs on user experience (UX) and the customer relationship. Because of its central role in human relationships, as well as its contribution to the sense of social presence and to the building of the customer relationship, we chose to focus on one particular social skill: intimacy. We developed a theoretical model of virtual intimacy for ECAs, inspired by the literature in human psychology. First, we confirmed the validity of our model in a series of studies investigating observers' perception of virtual intimacy in interactions between a virtual tourism counselor expressing intimate multimodal behaviors and a tourist. Our results show that our virtual counselor is capable of generating as much intimacy as a human and that the perception of virtual intimacy is sensitive to different regulating factors (interaction properties, individual characteristics). Second, we explored the perceptions, behaviors and experiences of real tourists in real interaction situations with an autonomous version of our intimate virtual counselor. Our results identify virtual intimacy as a potential candidate to foster the social dimension of human-agent interactions and move toward a better UX. They open up new opportunities for ECAs to become genuine social partners.
Janssoone, Thomas. "Analyse de signaux sociaux multimodaux : application à la synthèse d’attitudes sociales chez un agent conversationnel animé". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS607.
Texto completo da fonte
During an interaction, non-verbal behavior reflects the affective state of the speaker, such as attitude or personality. Modulations in social signals, such as variations in head movements, facial expressions or prosody, convey information about someone's affective state. Nowadays, machines can use embodied conversational agents to express the same kinds of social cues. These agents can improve the quality of life in our modern societies if they provide natural interactions with users. Indeed, a virtual agent must express different attitudes according to its purpose, such as dominance for a tutor or kindness for a companion. The literature in sociology and psychology underlines the importance of the dynamics of social signals for the expression of different affective states. This thesis therefore proposes models focused on temporality to express a desired affective phenomenon. They are designed to handle social signals that are automatically extracted from a corpus. The purpose of this analysis is the generation of embodied conversational agents expressing a specific stance. A survey of existing databases led to the design of a corpus composed of presidential addresses. The high-definition videos allow algorithms to automatically evaluate the social signals. After a corrective process on the extracted social signals, an agent clones the human's behavior during the addresses. This provides an evaluation of the perception of attitudes with a human or a virtual agent as the protagonist. The SMART model uses sequence mining to find temporal association rules in interaction data. It finds accurate temporal information in the use of social signals and links it to a social attitude. The structure of these rules allows an easy transposition of this information to synthesize the behavior of a virtual agent. Perceptual studies validate this approach. A second model, SSN, designed during an international collaboration, is based on deep learning and domain separation. It allows multi-task learning of several affective phenomena and proposes a method to analyse the dynamics of the signals used. These contributions underline the importance of temporality in the synthesis of virtual agents to improve the expression of certain affective phenomena. The perspectives give recommendations for integrating this information into multimodal solutions.
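The temporal association rules that SMART mines can be pictured with a minimal, purely illustrative sketch: counting how often one social signal is followed by another within a short time window. The event names, timestamps and window below are invented for illustration and are not taken from the thesis corpus.

```python
# Minimal illustration of mining temporal association rules of the form
# "signal A is followed by signal B within W seconds". Event names,
# timestamps and the window length are invented, not from the thesis corpus.
from collections import Counter

# (timestamp_seconds, signal) events extracted from an interaction,
# sorted by time.
events = [
    (0.0, "smile"), (0.4, "head_nod"), (1.0, "gaze_away"),
    (2.0, "smile"), (2.3, "head_nod"), (5.0, "smile"),
    (5.1, "gaze_away"), (7.0, "head_nod"),
]

def mine_rules(events, window=0.5):
    """Count ordered pairs (A -> B) occurring within `window` seconds."""
    rules = Counter()
    for i, (t_a, a) in enumerate(events):
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break  # events are time-sorted, no later pair can qualify
            rules[(a, b)] += 1
    return rules

rules = mine_rules(events)
print(rules.most_common(2))
```

A real system would additionally score rules by support and confidence and attach them to the annotated attitude; this sketch only shows the windowed counting at the core of the idea.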
He, Jiefang. "Design & Synthesis of Enzyme Responsive Contrast Agents For MRI & Optical Imaging". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112280.
Texto completo da fonte
Over the last decade, medical imaging has evolved into one of the most powerful techniques in diagnostic clinical medicine and biomedical research. With the development of molecular imaging, responsive probes allowing multimodal imaging of molecular events are now required. In this work, we present the design and synthesis of lanthanide complexes with the aim of developing smart contrast agents for the detection of enzyme activity by MRI (T1/CEST) and optical imaging. The designed complexes are built around a macrocyclic lanthanide chelate appended with an aminopyridine, which is linked to an enzyme-sensitive trigger (e.g. a galactoside) via a self-immolative linker. The latter is expected to modify, temporarily and in an enzyme-dependent way, the magnetic and photo-physical properties of the complex. The concept was first validated on a model compound without a trigger. Although no difference in relaxivity was observed between models of the enzyme-activated and non-activated forms, precluding use in T1-MRI, different paraCEST effects were observed and found to depend on the lanthanide. Moreover, a previously unknown CEST effect was assigned to a carbamate function. Preliminary photo-physical studies also showed a different behavior of the two forms and confirmed the potential of these complexes as enzyme-responsive bimodal contrast agents. The synthesis of the enzyme-responsive probe was attempted by three different routes and was finally achieved in a thirteen-step process that remains to be optimized. A "structure-activity" relationship study was initiated with the synthesis of positional isomers on the pyridine of the model compound.
Malik, Muhammad Usman. "Learning multimodal interaction models in mixed societies A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms". Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR18.
Texto completo da fonte
Human-agent interaction and machine learning are two different research domains. Human-agent interaction refers to the techniques and concepts involved in developing smart agents, such as robots or virtual agents, capable of seamless interaction with humans to achieve a common goal. Machine learning, on the other hand, exploits statistical algorithms to learn data patterns. The proposed research work lies at the crossroads of these two research areas. Human interactions involve multiple modalities, which can be verbal, such as speech and text, as well as non-verbal, i.e. facial expressions, gaze, head and hand gestures, etc. To mimic real-time human-human interaction within human-agent interaction, multiple interaction modalities can be exploited. With the availability of multimodal human-human and human-agent interaction corpora, machine learning techniques can be used to develop various interrelated human-agent interaction models. In this regard, our research work proposes original models for addressee detection, turn change and next-speaker prediction, and visual focus of attention behaviour generation in multiparty interaction. Our addressee detection model predicts the addressee of an utterance during interaction involving more than two participants. The addressee detection problem is tackled as a supervised multiclass machine learning problem, and various machine learning algorithms are trained to develop addressee detection models. The results achieved show that the proposed addressee detection algorithms outperform a baseline. The second model we propose concerns turn change and next-speaker prediction in multiparty interaction. Turn change prediction is modeled as a binary classification problem, whereas next-speaker prediction is treated as a multiclass classification problem; machine learning algorithms are trained to solve these two interrelated problems. The results show that the proposed models outperform the baselines. Finally, the third proposed model concerns visual focus of attention (VFOA) behaviour generation for both speakers and listeners in multiparty interaction. This model is divided into several sub-models that are trained via machine learning as well as heuristic techniques. The results show that our proposed systems yield better performance than the baseline models developed via random and rule-based approaches. The proposed VFOA behaviour generation model is currently implemented as a series of four modules to create different interaction scenarios between multiple virtual agents. For evaluation, recorded videos of the VFOA generation models for speakers and listeners are presented to users, who rate the baseline, the real VFOA behaviour and the proposed VFOA models on various naturalness criteria. The results show that the VFOA behaviour generated via the proposed model is perceived as more natural than the baselines and as equally natural as real VFOA behaviour.
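Framing addressee detection as supervised multiclass classification, as this abstract does, can be sketched as follows. This is only an illustrative pipeline on synthetic data; the feature set and the random-forest choice are assumptions, not the thesis' actual features or algorithms.

```python
# Illustrative sketch: addressee detection as multiclass classification.
# Features and data are synthetic, not from the thesis corpora.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: multimodal features of one utterance (e.g. gaze target,
# head orientation, prosodic energy, lexical cues) -- hypothetical here.
X = rng.random((300, 6))
# Label: which of three participants is the addressee (0, 1 or 2).
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.2f}")
```

On real multiparty corpora, the labels would come from manual annotation and the accuracy would be compared against a majority-class or rule-based baseline, as the abstract describes.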
Ho, Chih-Yuan. "Multimodal Interfaces: Supporting Synchronous Distributed Multi-Agent Communication and Coordination in Complex Domains". The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1380545941.
Texto completo da fonte
Larivière, Mélusine. "Nanoparticles functionalized with human antibodies for multimodal molecular imaging of atherosclerosis". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0389/document.
Texto completo da fonte
Because cardiovascular diseases are the leading cause of death in the world, providing clinicians with reliable and straightforward imaging techniques to identify "vulnerable" patients in the general population appears to be the Holy Grail of the cardiovascular field. Atherosclerosis, identified as the underlying condition for most acute cardiovascular events, is characterized by the constitution of a lipid-rich atheroma plaque, driven both by excess cholesterol and by inflammation, whose eventual rupture triggers clotting in the blood flow. It involves a wealth of cellular and molecular actors, each a potential marker for molecular imaging, which aims at deciphering how to warn clinicians about the possible occurrence of myocardial infarction or stroke. Here, human antibodies (HuAbs) selected by phage display for their recognition of over-expressed biomarkers of the pathology are proposed as targeting ligands. They were further engineered for site-specific grafting, either by introducing cysteine or sortase recognition tags, and used to target contrast agents for MRI, fluorescence or PET imaging. In vitro and ex vivo validation studies were carried out on atheroma sections of animal models. In vivo MRI studies in the ApoE-/- mouse model were carried out with the anti-platelet TEG4 HuAb, providing insights into the biological relevance and feasibility of detecting platelet-rich, high-risk atheroma plaques. The development of contrast agents useful in multi-modality imaging and multi-functionalized with HuAbs is underway. It should serve as an accurate molecular imaging method for atherosclerosis that could be more easily translated into the clinical arena.
Zidi, Kamel. "Système Interactif d'Aide au Déplacement Multimodal (SIADM)". Phd thesis, Ecole Centrale de Lille, 2006. http://tel.archives-ouvertes.fr/tel-00142159.
Texto completo da fonte
The transport network is modelled using a multi-zone architecture. This architecture shows the distributed aspect of the system and the interactions and relations that can take place between the different zones. In this work we present a multi-agent travel assistance system, SMAAD (Système Multi-Agent d'Aide au Déplacement). The agents of this system use the optimization module developed in the first part. Our work was carried out within the "VIATIC-MOBILITE" project, project 6 of the I-Trans competitiveness cluster.
Feki, Mohamed Firas. "Optimisation distribuée pour la recherche des itinéraires multi-opérateurs dans un réseau de transport co-modal". Phd thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00604509.
Texto completo da fonteHalvars, Franzén Olle. "MED VISUELL UTGÅNGSPUNKT : En undersökning av en metodprövande och multimodal animations-workshop". Thesis, Konstfack, Institutionen för Bildpedagogik (BI), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:konstfack:diva-3799.
Texto completo da fonteDoak, Lauran. "Exploring the multimodal communication and agency of children in an autism classroom". Thesis, Sheffield Hallam University, 2018. http://shura.shu.ac.uk/24008/.
Texto completo da fonteGruwell, Leigh C. "Multimodal Feminist Epistemologies: Networked Rhetorical Agency and the Materiality of Digital Composing". Miami University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=miami1436810721.
Texto completo da fonteDemse, Habtesilase Ketema. "Challenges of Multimodal Transport Services:The Case of Ethiopian Shipping and Logistics Service Enterprise : Ethiopia- Sweden-Denmark and UK trade routes operation". Thesis, Linnéuniversitetet, Institutionen för marknadsföring (MF), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-79286.
Texto completo da fonteJones, Alistair. "Co-located collaboration in interactive spaces for preliminary design". Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01067774.
Texto completo da fonteHivin, Ludovic F. "Sustainability of multimodal intercity transportation using a hybrid system dynamics and agent-based modeling approach". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53089.
Texto completo da fonteYang, Liu. "Modelling interruptions in human-agent interaction". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS611.pdf.
Texto completo da fonte
Interruptions play a significant role in shaping human communication, occurring frequently in everyday conversations. They serve to regulate conversation flow, convey social cues, and promote shared understanding among speakers. Human communication involves a range of multimodal signals beyond speech alone. Verbal and non-verbal modes of communication are intricately intertwined, conveying semantic and pragmatic content while shaping the communication process. The vocal mode incorporates acoustic features such as prosody, while the visual mode encompasses facial expressions, hand gestures and body language. The rise of virtual and online communication has necessitated the development of expressive communication for human-like embodied agents, including Embodied Conversational Agents (ECAs) and social robots. To foster seamless and natural interactions between humans and virtual agents, it is crucial to equip virtual agents with the ability to handle interruptions during interactions. This manuscript focuses on studying interruptions in human-human interactions and on enabling ECAs to interrupt human users during conversations. The primary objectives of this research are twofold: (1) in human-human interaction, to analyse acoustic and visual signals to categorise interruption types and detect when interruptions occur; (2) to endow an ECA with the capability to predict when to interrupt and to generate its multimodal behaviour. To achieve these goals, we propose an annotation schema for identifying and classifying smooth turn exchanges, backchannels and different interruption types. We manually annotate exchanges in two corpora, a part of the AMI corpus and the French section of the NoXi corpus. After analysing multimodal non-verbal signals, we introduce MIC, an approach to classify the interruption type based on selected non-verbal signals (facial expression, prosody, head and hand motion) from both interlocutors (the interruptee and the interrupter). We also introduce One-PredIT, which uses a one-class classifier to identify potential interruption points by monitoring the real-time non-verbal behaviour of the current speaker (the interruptee only). Additionally, we propose AI-BGM, a generative model to compute the facial expressions and head rotations of the ECA when it is interrupting. Given the limited amount of data at our disposal, we employ transfer learning to train our interruption behaviour generation model on top of the well-trained Augmented Self-Attention Pruning neural network model.
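The One-PredIT idea, flagging candidate interruption points by monitoring only the current speaker's non-verbal stream with a one-class classifier, can be sketched roughly as follows. The features, data and model choice (a one-class SVM) are illustrative assumptions, not the thesis' actual setup.

```python
# Illustrative sketch: a one-class model trained on frames of smooth,
# uninterrupted speech flags out-of-distribution frames as potential
# interruption points. Features and data are synthetic, not from the
# AMI/NoXi corpora.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Training data: non-verbal features of the speaker during smooth speech
# (e.g. pitch, intensity, head-motion energy per frame) -- hypothetical.
smooth_frames = rng.normal(0.0, 1.0, size=(500, 4))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(smooth_frames)

# New frames: mostly smooth, with a burst of atypical behaviour at the end.
new_frames = np.vstack([
    rng.normal(0.0, 1.0, size=(20, 4)),
    rng.normal(4.0, 1.0, size=(5, 4)),  # atypical: candidate interruption
])
labels = model.predict(new_frames)      # +1 = in-distribution, -1 = outlier

candidates = np.flatnonzero(labels == -1)
print("candidate interruption frames:", candidates)
```

The appeal of the one-class formulation is that it needs no labeled interruption examples at training time: only the speaker's typical behaviour is modeled, and deviations from it are proposed as interruption opportunities.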
Garcia, Geoffrey. "Une approche logicielle du traitement de la dyslexie : étude de modèles et applications". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22634/document.
Texto completo da fonte
Neuropsychological disorders are widespread and pose real public health problems. In particular, in our modern society where written communication is ubiquitous, dyslexia can be extremely disabling. Nevertheless, the diagnosis and remediation of this pathology remain fastidious and lack standardization, which seems inherent to the characterization of dyslexia as a diagnosis of exclusion, to the multitude of practitioners involved in its treatment, and to the lack of objectivity of some existing methods. In this respect, we decided to investigate the possibilities offered by modern computing to overcome these barriers. Indeed, we assumed that the democratization of computer systems and their computing power could make them an ideal tool to alleviate the difficulties encountered in the treatment of dyslexia. This research led us to study software as well as hardware techniques that could lead to the development of an inexpensive and scalable system able to support a beneficial and progressive change of practices in this pathology's field. With this project, we place ourselves firmly in an innovative stream serving the quality of care and aid provided to people with disabilities. Our work identified different areas of improvement that the use of computers enables. Each of these areas was then the subject of extensive research, modeling and prototype development. We also considered the methodology for designing this kind of system as a whole. In particular, our reflections and these accomplishments allowed us to define a software framework suitable for implementing a software platform that we called PAMMA. This platform should theoretically have access to all the tools required for the flexible and efficient development of medical applications integrating business processes.
It is expected that this system will allow the development of applications for the care of dyslexic patients, leading to a faster and more accurate diagnosis and to more appropriate and effective remediation. Encouraging perspectives emerge from our innovation efforts. However, such initiatives can only be achieved through multidisciplinary collaborations with substantial functional, technical and financial means. Creating such a consortium seems to be the next step required to obtain the funding necessary to build a first functional prototype of PAMMA, as well as its first applications. Clinical studies may then be conducted to conclusively prove the effectiveness of such an approach for treating dyslexia and, eventually, other neuropsychological disorders.
Mejri, Hinda. "Un système d’aide à la régulation d’un réseau de transport multimodal perturbé : réponse au problème de congestion". Thesis, Ecole centrale de Lille, 2012. http://www.theses.fr/2012ECLI0008/document.
Texto completo da fonte
Transport networks have grown with the increasing number of vehicles and stations and the emergence of new, essentially multimodal and intermodal concepts. Thus, the task of managing public transport systems has become very complex and difficult for regulators. To cope with these difficulties, decision-support systems have been developed as an effective solution for traffic control. They can transmit real-time traffic information on transport networks. Our work is based on designing a control system for multimodal transport networks. It can serve as an essential tool providing effective, real-time solutions to the problem of traffic congestion, and can give users the information they need to decide whether to travel with or without their car. The proposed system is a hybrid between a graph modeling the network and a multi-agent system, supported by an evolutionary approach for generating an optimal control solution. This is justified by the open, distributed and complex nature of multimodal transport networks.
Paris, Jérémy. "Nanoparticules d'oxydes de fer et nanotubes de titanate pour l'imagerie multimodale et à destination de la thérapie anticancéreuse". Thesis, Dijon, 2013. http://www.theses.fr/2013DIJOS065/document.
Texto completo da fonte
The new applications of nanoparticles in the medical field are one of the essential factors in the medical progress expected at the beginning of the 21st century, and the domain of medical imaging is likewise affected by this technological evolution. This work consisted in developing theranostic probes based on iron oxide nanoparticles (SPIO) and titanate nanotubes (TiONts) for multimodal imaging (magnetic/nuclear or magnetic/optical) that also possess a therapeutic effect (hyperthermia/PDT or radiosensitization/PDT). The titanate nanotubes in this study have an average length of about 150 nm and were obtained by Kasuga's hydrothermal synthesis. These nanotubes present an outer diameter of about 10 nm and an inner cavity of 4 nm. The iron oxide nanoparticles, on the other hand, were synthesized by soft chemistry (the "Massart" method). These spinel-like iron oxide nanoparticles have a crystallite size of 9 nm in diameter and exhibit superparamagnetic behavior, which was highlighted by FC/ZFC measurements. To prepare these nanoparticles to receive molecules of biological interest, two linkers bearing more reactive organic functions (APTES: NH2, or PHA: COOH) were grafted onto the surface of both types of nanoparticles. Their presence was shown by different techniques (XPS, IR, UV-vis). The amount of grafted linkers was determined by TGA and in all cases is close to 5 molecules/nm2. First, titanate nanotubes were coated with a macrocyclic chelating agent (0.2 DOTA/nm2). After radiolabelling with indium-111, the TiONts-DOTA[In] nanohybrids were injected into Swiss nude mice and observed by SPECT/CT imaging to characterize their biodistribution. The SPECT/CT images and the radioactivity measured in each organ showed that after one hour the nanotubes are located in the lungs and in urine; they are then gradually eliminated and are found only in urine after 24 hours.
The same macrocyclic agent was grafted onto the SPIO surface to create MRI/SPECT or MRI/PET multimodal probes. Alongside this study, a fluorophore (zinc phthalocyanine) was also grafted onto the surface of the nanoparticles. The synthesized SPIO-Pc nanohybrid has the required properties of a bimodal MRI/optical imaging probe thanks to its emission wavelength around 670 nm; its relaxivity is about 70 L.mmol-1.s-1 (relative to Fe3O4). Furthermore, the nanohybrids were coated with PEG to make them stealthy, biocompatible and stable. In this study, the toxicity of most nanohybrids was evaluated using the in vivo zebrafish model. The studied nanohybrids did not show any toxicity, hatching disruption or malformation in zebrafish larvae.
Fernandez, Davila Jorge Luis. "Planification cognitive basée sur la logique : de la théorie à l'implémentation". Electronic Thesis or Diss., Toulouse 3, 2022. http://thesesups.ups-tlse.fr/5491/.
Texto completo da fonteIn this thesis, we introduced a cognitive planning framework that can be used to endow artificial agents with the necessary skills to represent and reason about other agents' mental states. Our cognitive planning framework is based on an NP-fragment of an epistemic logic with a semantics exploiting belief bases and whose satisfiability problem can be reduced to SAT. We detail the set of translations for the reduction of our fragment to SAT. In addition, we provide complexity results for checking satisfiability of formulas in our NP-fragment. We define a general architecture for the cognitive planning problem. Afterward, we define two types of planning problem: informative and interrogative, and we find the complexity of finding a solution for the cognitive planning problem in both cases. Furthermore, we illustrated the potential of our framework for applications in human-machine interaction with the help of two examples in which an artificial agent is expected to interact with a human agent through dialogue and to persuade the human to behave in a certain way. Moreover, we introduced a formalization of simple cognitive planning as a quantified boolean formula (QBF) with an optimal number of quantifiers in the prefix. The model for cognitive planning was implemented. We describe how to represent and generate the belief base. Furthermore, we demonstrate how the machine performs the reasoning process to find a sequence of speech acts intended to induce a potential intention in the human agent. The implemented system has three main components: belief revision, cognitive planning, and the translator module. These modules work integrated to capture the human agent's beliefs during the human-machine interaction process and generate a sequence of speech acts to achieve a persuasive goal. Finally, we present an epistemic language to represent the beliefs and actions of an artificial player in the context of the board game Yokai. 
The cooperative game Yokai requires a combination of theory of mind (ToM) and temporal and spatial reasoning for an artificial agent to play effectively. We show that the language properly accounts for these three dimensions and that its satisfiability problem is NP-complete. We implement the game and run experiments comparing the level of cooperation between agents pursuing a common goal in two scenarios: when the game is played by a human and the artificial agent, and when it is played by two humans.
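The SAT reduction mentioned in this abstract can be illustrated with a toy propositional encoding. The sketch below is a minimal, illustrative assumption: the variable meanings (`B_H` for "the human believes p", `I_H` for "the human intends the goal") and the brute-force check are invented for exposition and are not the thesis's actual translation of its epistemic NP-fragment.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check over CNF clauses.

    Clauses use 1-based integer literals; a negative integer is a
    negated variable. Enumeration is exponential and only suitable
    for tiny examples; a real reduction would feed a SAT solver.
    """
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# Toy persuasion encoding: a speech act establishes the belief b_h,
# and b_h entails the intention i_h. We check that a state where the
# persuasive goal i_h holds is consistent with these constraints.
B_H, I_H = 1, 2
clauses = [[B_H], [-B_H, I_H]]   # b_h, and b_h -> i_h
print(satisfiable(clauses, 2))
```

The same checker reports unsatisfiability for contradictory belief constraints such as `[[B_H], [-B_H]]`, which is the situation a belief-revision module would have to resolve before planning.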
Wang, Zhanjun. "Optimisation avancée pour la recherche et la composition des itinéraires comodaux au profit des clients de transport". Thesis, Ecole centrale de Lille, 2015. http://www.theses.fr/2015ECLI0029/document.
Full text of the source
Nowadays, the environmental impact of transport is significant. To address these problems, this work focuses on the implementation of a transport information system that integrates existing means of transport, including public transport and shared transport such as carpooling and car-sharing, to answer users' requests. In this application context, we design algorithms that provide attractive paths under the imposed constraints, even for simultaneous requests. Different acceleration techniques for path planning are used to reduce the search space and improve performance. The attractive paths are divided into route sections on which the available offers are allocated to the different requests; this is treated as a resource allocation problem and solved with metaheuristic algorithms. To account for the distributed and dynamic aspects of the problem, the solving strategy combines several concepts, such as multi-agent systems and different optimization methods. The proposed methods are tested on realistic scenarios with instances extracted from real-world transport networks. The obtained results indicate that our approaches can efficiently solve itinerary planning problems by providing good and complete solutions.
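The allocation step this abstract describes can be sketched with a simple metaheuristic. The hill-climbing loop and the toy carpooling instance below are illustrative assumptions (the function names, seat costs, and capacity penalty are invented here), not the algorithms or data used in the thesis:

```python
import random

def allocate(requests, offers, cost, iters=2000, seed=0):
    """Hill-climbing allocation of transport offers to requests.

    `cost(assignment)` scores a request->offer mapping; each step
    reassigns one randomly chosen request and keeps the change only
    if the total cost decreases.
    """
    rng = random.Random(seed)
    best = {r: rng.choice(offers) for r in requests}
    best_cost = cost(best)
    for _ in range(iters):
        cand = dict(best)
        cand[rng.choice(requests)] = rng.choice(offers)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Toy instance: three ride requests, two carpool offers with a
# per-seat cost and a penalty when an offer exceeds two seats.
requests = ["r1", "r2", "r3"]
offers = ["carA", "carB"]
seat_cost = {"carA": 3, "carB": 5}

def cost(assignment):
    total = sum(seat_cost[o] for o in assignment.values())
    for o in offers:
        load = sum(1 for v in assignment.values() if v == o)
        if load > 2:
            total += 10 * (load - 2)   # over-capacity penalty
    return total

best, best_cost = allocate(requests, offers, cost)
print(best_cost)
```

A single-move neighborhood like this is the simplest possible search; the thesis's distributed, multi-agent setting would additionally coordinate many such allocations across simultaneous requests.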
González, Rodríguez Martín. "GADEA: Sistema de Gestión de Interfaces de Usuario Autoadaptables basado en Componentes, Tecnología de Objetos y Agentes". Doctoral thesis, Universidad de Oviedo, 2001. http://hdl.handle.net/10803/11136.
Full text of the source