Dissertations / Theses on the topic 'Natural conversation'

Consult the top 50 dissertations / theses for your research on the topic 'Natural conversation.'


1

Dalacorte, Maria Cristina Faria. "Natural conversation and EFL textbook dialogues: a contrastive study." Universidade Federal de Santa Catarina, 1991. https://repositorio.ufsc.br/xmlui/handle/123456789/157696.

Full text
Abstract:
Master's thesis - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, 1991.
Examines structural, strategic and stylistic features of commercial interactions and telephone conversations in English and Portuguese, compared with scripted dialogues from English-language teaching textbooks. These textbooks claim to teach English through authentic dialogues. The analysis verifies whether the conversations presented in the textbooks display features similar to those of natural conversation. Through a detailed contrastive analysis of the two types of dialogue, the study shows that the textbook dialogues are not communicative but pseudo-interactive, since they exhibit features of the internal structure of classroom discourse.
2

Goh, Ong Sing. "A framework and evaluation of conversation agents." Murdoch University, 2008. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20081020.134601.

Full text
Abstract:
This project details the development of a novel and practical framework for the development of conversation agents (CAs), or conversation robots. CAs are software programs that provide a natural interface between humans and computers. In this study, 'conversation' refers to real-time dialogue exchange between human and machine, ranging from web chatting to "on-the-go" conversation through mobile devices. In essence, the project proposes a "smart and effective" communication technology in which an autonomous agent carries out simulated human conversation via multiple channels. The CA developed in this project is termed Artificial Intelligence Natural-language Identity (AINI), and AINI is used to illustrate the implementation and testing carried out in the project. Until now, most CAs have been developed with the short-term objective of serving as tools to convince users that they are talking with real humans, as in the case of the Turing Test. Traditional designs have relied mainly on ad hoc approaches and hand-crafted domain knowledge. Such approaches make it difficult for a fully integrated system to be developed and adapted to other domain applications and tasks. The framework proposed in this thesis addresses these limitations; overcoming the weaknesses of previous systems has been the key challenge in this study. The research has provided a better understanding of the system requirements and a systematic approach for constructing intelligent CAs based on an agent architecture with a modular N-tiered design. The study demonstrates an effective implementation and exploration of the new paradigm of Computer Mediated Conversation (CMC) through CAs. The most significant aspect of the proposed framework is its ability to re-use and encapsulate expertise, such as domain knowledge, natural-language querying and the human-computer interface, through plug-in components.
As a result, the developer does not need to change the framework implementation for different applications. The proposed system provides interoperability among heterogeneous systems and has the flexibility to be adapted to other languages, interface designs and domain applications. A modular design of knowledge representation facilitates the creation of the CA knowledge bases, enabling easier integration of open-domain and domain-specific knowledge and the ability to answer broader queries. To build the knowledge base for the CAs, the study also proposes a mechanism for gathering information from commonsense collaborative knowledge and online web documents. The proposed Automated Knowledge Extraction Agent (AKEA) is used to extract unstructured knowledge from the Web. Since it is also important to establish the trustworthiness of information sources, the thesis introduces a Web Knowledge Trust Model (WKTM) for this purpose. To assess the proposed framework, relevant tools and application modules were developed and evaluated to validate the performance and accuracy of the system. Both laboratory experiments and public experiments with online users in real time were carried out, and the results show that the proposed system is effective. In addition, it was demonstrated that the CA could be deployed on the Web, mobile services and Instant Messaging (IM). In the real-time human-machine conversation experiment, AINI was shown to carry out conversations with human users, providing spontaneous interaction in an unconstrained setting. The study observed that AINI and humans share common properties in linguistic features and paralinguistic cues. These human-computer interactions were analysed and contribute to an understanding of how users interact with CAs.
Such knowledge is also useful for developing conversation systems that exploit the commonalities found in these interactions. While AINI has difficulty responding to some forms of paralinguistic cues, this points to directions for further work to improve CA performance.
3

Niekrasz, John Joseph. "Toward summarization of communicative activities in spoken conversation." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6449.

Full text
Abstract:
This thesis is an inquiry into the nature and structure of face-to-face conversation, with a special focus on group meetings in the workplace. I argue that conversations are composed of episodes, each of which corresponds to an identifiable communicative activity such as giving instructions or telling a story. These activities are important because they are part of participants’ commonsense understanding of what happens in a conversation. They appear in natural summaries of conversations such as meeting minutes, and participants talk about them within the conversation itself. Episodic communicative activities therefore represent an essential component of practical, commonsense descriptions of conversations. The thesis objective is to provide a deeper understanding of how such activities may be recognized and differentiated from one another, and to develop a computational method for doing so automatically. The experiments are thus intended as initial steps toward future applications that will require analysis of such activities, such as an automatic minute-taker for workplace meetings, a browser for broadcast news archives, or an automatic decision mapper for planning interactions. My main theoretical contribution is to propose a novel analytical framework called participant relational analysis. The proposal argues that communicative activities are principally indicated through participant-relational features, i.e., expressions of relationships between participants and the dialogue. Participant-relational features, such as subjective language, verbal reference to the participants, and the distribution of speech activity amongst the participants, are therefore argued to be a principal means for analyzing the nature and structure of communicative activities. I then apply the proposed framework to two computational problems: automatic discourse segmentation and automatic discourse segment labeling. 
The first set of experiments test whether participant-relational features can serve as a basis for automatically segmenting conversations into discourse segments, e.g., activity episodes. Results show that they are effective across different levels of segmentation and different corpora, and indeed sometimes more effective than the commonly-used method of using semantic links between content words, i.e., lexical cohesion. They also show that feature performance is highly dependent on segment type, suggesting that human-annotated “topic segments” are in fact a multi-dimensional, heterogeneous collection of topic and activity-oriented units. Analysis of commonly used evaluation measures, performed in conjunction with the segmentation experiments, reveals that they fail to penalize substantially defective results due to inherent biases in the measures. I therefore preface the experiments with a comprehensive analysis of these biases and a proposal for a novel evaluation measure. A reevaluation of state-of-the-art segmentation algorithms using the novel measure produces substantially different results from previous studies. This raises serious questions about the effectiveness of some state-of-the-art algorithms and helps to identify the most appropriate ones to employ in the subsequent experiments. I also preface the experiments with an investigation of participant reference, an important type of participant-relational feature. I propose an annotation scheme with novel distinctions for vagueness, discourse function, and addressing-based referent inclusion, each of which are assessed for inter-coder reliability. The produced dataset includes annotations of 11,000 occasions of person-referring. The second set of experiments concern the use of participant-relational features to automatically identify labels for discourse segments. 
In contrast to assigning semantic topic labels, such as topical headlines, the proposed algorithm automatically labels segments according to activity type, e.g., presentation, discussion, and evaluation. The method is unsupervised and does not learn from annotated ground truth labels. Rather, it induces the labels through correlations between discourse segment boundaries and the occurrence of bracketing meta-discourse, i.e., occasions when the participants talk explicitly about what has just occurred or what is about to occur. Results show that bracketing meta-discourse is an effective basis for identifying some labels automatically, but that its use is limited if global correlations to segment features are not employed. This thesis addresses important pre-requisites to the automatic summarization of conversation. What I provide is a novel activity-oriented perspective on how summarization should be approached, and a novel participant-relational approach to conversational analysis. The experimental results show that analysis of participant-relational features is a […]
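Niekrasz's critique of segmentation evaluation concerns standard window-based measures. As a point of reference only (the thesis's own novel measure is not reproduced here), a minimal sketch of a WindowDiff-style computation over boundary-indicator sequences, under the assumption that segmentations are encoded as 0/1 lists, might look like this:

```python
def window_diff(reference, hypothesis, k=None):
    """WindowDiff-style error rate: slide a window of width k over two
    boundary-indicator sequences and count positions where the number of
    boundaries inside the window disagrees. A sketch, not NLTK's exact
    implementation (which differs slightly in window indexing)."""
    assert len(reference) == len(hypothesis)
    n = len(reference)
    if k is None:
        # conventional choice: roughly half the mean reference segment length
        k = max(1, n // (2 * (sum(reference) + 1)))
    errors = sum(
        1
        for i in range(n - k)
        if sum(reference[i:i + k]) != sum(hypothesis[i:i + k])
    )
    return errors / (n - k)
```

A perfect hypothesis scores 0; degenerate hypotheses (e.g. predicting no boundaries at all) are penalized in proportion to how often their window counts diverge from the reference, which is the bias behaviour the thesis examines.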
6

Comuni, Federica. "A natural language processing solution to probable Alzheimer’s disease detection in conversation transcripts." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-19889.

Full text
Abstract:
This study proposes an accuracy comparison of two of the best performing machine learning algorithms in natural language processing, the Bayesian Network and the Long Short-Term Memory (LSTM) Recurrent Neural Network, in detecting Alzheimer’s disease symptoms in conversation transcripts. Because of the current global rise of life expectancy, the number of seniors affected by Alzheimer’s disease worldwide is increasing each year. Early detection is important to ensure that affected seniors take measures to relieve symptoms when possible or prepare plans before further cognitive decline occurs. Literature shows that natural language processing can be a valid tool for early diagnosis of the disease. This study found that mild dementia and possible Alzheimer’s can be detected in conversation transcripts with promising results, and that the LSTM is particularly accurate in said detection, reaching an accuracy of 86.5% on the chosen dataset. The Bayesian Network classified with an accuracy of 72.1%. The study confirms the effectiveness of a natural language processing approach to detecting Alzheimer’s disease.
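Comuni's study compares a Bayesian Network against an LSTM on conversation transcripts. Neither the thesis's models nor its dataset are reproduced here; as a rough illustration of the Bayesian side of such a comparison, here is a minimal multinomial naive Bayes text classifier (a simpler relative of the Bayesian Network used in the thesis), with an invented two-transcript toy corpus in the test:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes with add-one smoothing over word counts.
    A teaching sketch: real transcript classification would need proper
    tokenization, feature engineering, and a far larger corpus."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)          # class frequencies
        self.word_counts = defaultdict(Counter)  # per-class word counts
        self.vocab = set()
        for doc, y in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        total_docs = sum(self.priors.values())
        best, best_lp = None, float("-inf")
        for y in self.classes:
            lp = math.log(self.priors[y] / total_docs)
            denom = sum(self.word_counts[y].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[y][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

The LSTM side of the comparison would instead read the transcript as a word sequence, which is what lets it pick up the ordering cues a bag-of-words Bayesian model discards.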
7

Fuscone, Simone. "A data intensive approach for characterizing speech interpersonal dynamics in natural conversations." Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0444.

Full text
Abstract:
During a conversation, participants tend to tune their communicative production, consciously or not, to their interlocutor. It is generally accepted that, under normal circumstances, this phenomenon results in the convergence of the two participants' speech parameters. The literature offers many studies describing convergence effects in interpersonal dynamics, but some aspects remain unclear. The first concerns the mechanisms governing the phenomenon in natural conversations: while existing studies often involve controlled laboratory conditions, natural conversations are hard to study because of the spontaneous flow of the conversants and the high variability of the tracked parameters. Secondly, it is still not well understood how participants modify their speech style (i.e., the dynamics) over the course of a conversation, or which factors influence these modifications. In this thesis, we aim to validate previous results on acoustic-prosodic convergence and to provide novel approaches: a partial a posteriori filter on natural conversations, and a method for tracking interpersonal dynamics. We use classical machine learning approaches (e.g., linear mixed models and random forests) and more recent deep learning algorithms (an LSTM architecture). The results extend the landscape of convergence effects to uncontrolled data and offer novel methods for controlling the variability of natural conversations, together with a prediction-task paradigm for evaluating interpersonal dynamics, which consists in evaluating the influence of the speaker and the interlocutor on each other's speech style.
8

Cuenca, Montesino José María. "L’application WhatsApp dans la négociation franco-espagnole : un catalyseur de la confiance interculturelle." Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100128/document.

Full text
Abstract:
The convergence of information and communication technologies with computing has blurred spatial and temporal boundaries. The revolution brought about by mobile telephony reaches its peak in the smartphone, a generation of intelligent phones that integrate computing functions, including applications. Thanks to these, "mobiquitous" communication has become an everyday reality. The free instant-messaging application WhatsApp appeared in 2009 and met with dazzling success in Spain. Its use has given rise to a new discursive genre: the "WhatsApp conversation". This research is based on a case study analysing the linguistic and pragmatic manifestations of interpersonal and intercultural trust in WhatsApp conversations conducted in the professional context of the wine-making sector. We analyse these manifestations in a corpus of authentic WhatsApp conversations between a businesswoman in France and three of her Spanish commercial partners.
9

Liu, Yulan. "Distant speech recognition of natural spontaneous multi-party conversations." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/17691/.

Full text
Abstract:
Distant speech recognition (DSR) has gained wide interest recently. While deep networks keep improving ASR overall, a performance gap remains between close-talking and distant recordings. The work in this thesis therefore aims to provide insights for further improvement of DSR performance. The investigation starts with the collection of the first multi-microphone, multi-media corpus of natural spontaneous multi-party conversations in native English with speaker locations tracked: the Sheffield Wargame Corpus (SWC). State-of-the-art recognition systems, with acoustic models both trained standalone and adapted, show word error rates (WERs) above 40% on headset recordings and above 70% on distant recordings. A comparison between the SWC and the AMI corpus suggests several properties unique to real natural spontaneous conversations, e.g. very short utterances and emotional speech. Further experimental analysis based on simulated and real data quantifies the impact of such influence factors on DSR performance, and illustrates the complex interaction among multiple factors, which makes the treatment of each individual factor much more difficult. The reverberation factor is studied further. It is shown that the reverberation effect on speech features can be accurately modelled with a temporal convolution in the complex spectrogram domain. Based on this, a polynomial reverberation score is proposed to measure the distortion level of short utterances. Compared to existing reverberation metrics such as C50, it avoids a rigid early/late reverberation partition without compromising performance in ranking the reverberation level of recording environments and channels. Furthermore, existing reverberation measures are signal-independent and thus unable to accurately estimate the reverberation distortion level in short recordings.
Inspired by a phonetic analysis of reverberation distortion via self-masking and overlap-masking, a novel partition of reverberation distortion into intra-phone smearing and inter-phone smearing is proposed, so that the distortion level is first estimated for each part and then combined.
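The core modelling claim above is that reverberation acts as a temporal convolution in the complex spectrogram domain. A minimal sketch of that idea, applying a per-frequency complex FIR filter across STFT frames (the frame values and filter taps below are placeholders, not measured room impulse responses from the thesis):

```python
def reverberate_spectrogram(frames, rir_taps):
    """Approximate reverberation as a temporal convolution applied
    independently in each frequency bin of a complex spectrogram:
        Y[t, f] = sum_k H[k, f] * X[t - k, f]
    `frames` is a list of time frames, each a list of complex STFT bins;
    `rir_taps` is a list of per-tap lists of complex filter coefficients,
    one coefficient per frequency bin."""
    n_frames = len(frames)
    n_bins = len(frames[0])
    out = [[0j] * n_bins for _ in range(n_frames)]
    for t in range(n_frames):
        for k, tap in enumerate(rir_taps):
            if t - k < 0:
                break  # no frames before the start of the signal
            for f in range(n_bins):
                out[t][f] += tap[f] * frames[t - k][f]
    return out
```

With a single unit tap the output equals the input (no reverberation); each additional tap mixes in a delayed, scaled copy of earlier frames, which is the temporal smearing the thesis's intra-phone/inter-phone partition then decomposes.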
10

Campbell, Robert. "Understanding and disrupting institutional settings : using networks of conversations to re-imagine future farming lives." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2013. https://ro.ecu.edu.au/theses/603.

Full text
Abstract:
Farmers in Australia and elsewhere face the challenge of remaining profitable whilst dealing with adverse structural arrangements and public expectations to better manage environmental degradation. This thesis draws on arguments that dominant paradigms in agricultural science and environmental management have often been ineffective in addressing these apparently competing demands and appear poorly suited to ‘messy’ situations characterized by uncertainty and complexity, and in which diverse stakeholders are motivated by varying goals and values. Engaging with such situations requires a philosophy and methodology that accepts a multiplicity of perspectives and which seeks to learn about and reflect upon novel ways of thinking and acting. Among the underlying ideas that have shaped this project is the importance of recognising the assumptions and commitments that researchers bring to their practice in order that traditions are not uncritically reproduced and that the products of our thinking are not reified. Regarding farming as less a set of technical practices and more as a human activity taking place within broader economic, social, cultural and ecological contexts, I sought to engage a group of farmers in southern Western Australia in a process of taking action to address an issue of common concern that would help them to live and farm well in their district. My role as both researcher and facilitator of conversations was driven by a commitment to dialogue as a process of meaning making and relationship building. Together we explored some of the broader contexts within which the narrower conceptions of economic and ecological problems are often uncritically placed. Taking concrete action together however proved beyond the scope of my research. The challenge of feeding ourselves while better caring for the land and each other will require imaginative as well as technical resources. 
To this end I have also sought to sketch out some of the creative possibilities contained within the health metaphor as it is applied to soil, arguing that its use as a proxy for quality or condition fails to utilize its disruptive potential.
APA, Harvard, Vancouver, ISO, and other styles
11

Guichard, Jonathan. "Quality Assessment of Conversational Agents : Assessing the Robustness of Conversational Agents to Errors and Lexical Variability." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226552.

Full text
Abstract:
Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle with users abandoning the system. In this thesis we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated based on known working input by performing lexical substitutions and by introducing multiple spelling divergences. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results as it appears that this approach can help anticipate potential understanding shortcomings, and that these shortcomings can be addressed by the generated paraphrases.
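The paraphrase-generation strategy this abstract describes (lexical substitutions plus spelling divergences applied to known working input) can be sketched roughly as follows. The synonym table and the adjacent-character-swap typo rule are invented placeholders for illustration, not the resources used in the thesis:

```python
import random

# Hypothetical synonym table; a real system would draw on a lexical
# resource such as WordNet (an assumption, not part of the cited thesis).
SYNONYMS = {
    "cancel": ["call off", "drop"],
    "flight": ["plane ticket"],
}

def lexical_substitutions(utterance):
    """Generate paraphrases by swapping one word for a known synonym."""
    words = utterance.split()
    paraphrases = []
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            paraphrases.append(" ".join(words[:i] + [syn] + words[i + 1:]))
    return paraphrases

def spelling_divergences(utterance, seed=0):
    """Introduce a simple typo: swap two adjacent characters in one word."""
    rng = random.Random(seed)
    words = utterance.split()
    i = rng.randrange(len(words))
    w = words[i]
    if len(w) > 2:
        j = rng.randrange(len(w) - 1)
        w = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    words[i] = w
    return " ".join(words)

# Each paraphrase shares the intent of the original utterance, so the
# agent's answer is expected to stay the same; a change flags a weakness.
tests = lexical_substitutions("cancel my flight")
```

Because the expected intent of every generated variant is known in advance, any divergence in the agent's response localizes an understanding weakness.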
APA, Harvard, Vancouver, ISO, and other styles
12

Wilkens, Rodrigo Souza. "A study of the use of natural language processing for conversational agents." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142158.

Full text
Abstract:
Language is a mark of humanity and consciousness, and conversation (or dialogue) is one of the most fundamental forms of communication that we learn as children. One way to make a computer more attractive for interaction with users is therefore through the use of natural language. Among the systems with some degree of language capability, the Eliza chatterbot is probably the first with a focus on dialogue. To make the interaction more interesting and useful to the user there are approaches beyond chatterbots, such as conversational agents. These agents generally have, to some degree, properties such as: a body (with cognitive states, including beliefs, desires and intentions or goals); interactive embodiment in the real or virtual world (including perception of events, communication, the ability to manipulate the world and to communicate with other agents); and human-like behavior (including affective abilities). This type of agent has been called by several names, including animated agents or embodied conversational agents (ECAs). A dialogue system has six basic components. (1) The speech recognition component is responsible for translating the user's speech into text. (2) The natural language understanding component produces a semantic representation suitable for dialogue, usually using grammars and ontologies. (3) The task manager chooses the concepts to be expressed to the user. (4) The natural language generation component defines how to express these concepts in words. (5) The dialogue manager controls the structure of the dialogue. (6) The speech synthesizer is responsible for translating the agent's answer into speech. However, there is no consensus about the resources needed to develop conversational agents, or about the difficulty involved (especially for resource-poor languages).
This work focuses on the influence of the natural language components (understanding and dialogue management) and analyses in particular the use of syntactic parsing systems as part of developing conversational agents with more flexible language capabilities. It analyses which parser resources contribute to conversational agents and discusses how to develop them for Portuguese, a resource-poor language. To do so, we analyze approaches to natural language understanding and identify parsing approaches that offer good performance; based on this analysis, we develop a prototype to evaluate the impact of using a parser in a conversational agent.
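The six-component architecture enumerated in this abstract can be sketched as a toy end-to-end pipeline. All class names, the intent rule, and the template responses are illustrative stand-ins, not the thesis's implementation:

```python
# A minimal sketch of the six-component dialogue-system pipeline the
# abstract describes; every class here is a hypothetical stub.

class SpeechRecognizer:          # (1) speech -> text
    def transcribe(self, audio):
        return audio             # stub: treat "audio" as already-transcribed text

class NLU:                       # (2) text -> semantic representation
    def parse(self, text):
        intent = "greet" if "hello" in text.lower() else "unknown"
        return {"intent": intent, "text": text}

class TaskManager:               # (3) choose concepts to express
    def select(self, frame):
        return ["greeting"] if frame["intent"] == "greet" else ["clarify"]

class NLG:                       # (4) concepts -> words
    TEMPLATES = {"greeting": "Hello! How can I help?",
                 "clarify": "Sorry, could you rephrase that?"}
    def realize(self, concepts):
        return self.TEMPLATES[concepts[0]]

class DialogueManager:           # (5) controls the structure of the dialogue
    def __init__(self):
        self.nlu, self.tm, self.nlg = NLU(), TaskManager(), NLG()
    def turn(self, text):
        return self.nlg.realize(self.tm.select(self.nlu.parse(text)))

class Synthesizer:               # (6) text -> speech (stub)
    def speak(self, text):
        return text

dm = DialogueManager()
reply = dm.turn("Hello there")   # -> "Hello! How can I help?"
```

In a real system each stub would wrap a full component (an ASR engine, a grammar- or ontology-backed NLU, a TTS voice); the sketch only shows how the six stages chain together.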
APA, Harvard, Vancouver, ISO, and other styles
13

Ray, Arijit. "The Art of Deep Connection - Towards Natural and Pragmatic Conversational Agent Interactions." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78335.

Full text
Abstract:
As research in Artificial Intelligence (AI) advances, it is crucial to focus on having seamless communication between humans and machines in order to effectively accomplish tasks. Smooth human-machine communication requires the machine to be sensible and human-like while interacting with humans, while simultaneously being capable of extracting the maximum information it needs to accomplish the desired task. Since a lot of the tasks required to be solved by machines today involve the understanding of images, training machines to have human-like and effective image-grounded conversations with humans is one important step towards achieving this goal. Although we now have agents that can answer questions asked for images, they are prone to failure from confusing input, and cannot ask clarification questions, in turn, to extract the desired information from humans. Hence, as a first step, we direct our efforts towards making Visual Question Answering agents human-like by making them resilient to confusing inputs that otherwise do not confuse humans. Not only is it crucial for a machine to answer questions reasonably, it should also know how to ask questions sequentially to extract the desired information it needs from a human. Hence, we introduce a novel game called the Visual 20 Questions Game, where a machine tries to figure out a secret image a human has picked by having a natural language conversation with the human. Using deep learning techniques like recurrent neural networks and sequence-to-sequence learning, we demonstrate scalable and reasonable performances on both the tasks.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
14

Elvir, Miguel. "EPISODIC MEMORY MODEL FOR EMBODIED CONVERSATIONAL AGENTS." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3000.

Full text
Abstract:
Embodied Conversational Agents (ECAs) form part of a range of virtual characters whose intended purposes include engaging in natural conversations with human users. While the literature is rife with descriptions of attempts at producing viable ECA architectures, few authors have addressed the role of episodic memory models in conversational agents. This form of memory, which provides a sense of autobiographical record-keeping in humans, has only recently been peripherally integrated into dialog management tools for ECAs. In our work, we take a closer look at the shared characteristics of episodic memory models in recent examples from the field. Additionally, we propose several enhancements to these existing models through a unified episodic memory model for ECAs. As part of our research into episodic memory models, we present a process for determining the prevalent contexts in the conversations obtained from the aforementioned interactions. The process demonstrates the use of statistical and machine learning services, as well as natural language processing techniques, to extract relevant snippets from conversations. Finally, mechanisms to store, retrieve, and recall episodes from previous conversations are discussed. A primary contribution of this research is in the context of contemporary memory models for conversational agents and cognitive architectures; to the best of our knowledge, this is the first attempt at providing a comparative summary of existing works. As implementations of ECAs become more complex and encompass more realistic conversation engines, we expect that episodic memory models will continue to evolve and further enhance the naturalness of conversations.
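A minimal sketch of the store/retrieve/recall mechanisms discussed in this abstract, using simple term overlap between a cue and stored episode contexts. The overlap scoring is an assumption made for illustration; the thesis's actual episodic model is richer:

```python
from collections import Counter

class EpisodicMemory:
    """Toy episodic store keyed by the terms of past conversations."""

    def __init__(self):
        self.episodes = []          # list of (context_terms, summary)

    def store(self, utterances, summary):
        terms = Counter(w.lower() for u in utterances for w in u.split())
        self.episodes.append((terms, summary))

    def recall(self, cue):
        """Return the stored summary whose context best overlaps the cue."""
        cue_terms = Counter(w.lower() for w in cue.split())

        def overlap(ep):
            return sum((ep[0] & cue_terms).values())

        best = max(self.episodes, key=overlap, default=None)
        return best[1] if best and overlap(best) > 0 else None

mem = EpisodicMemory()
mem.store(["We talked about your trip to Paris"], "user planned Paris trip")
mem.store(["Your dog Rex was sick"], "user's dog was ill")
```

An agent with such a store can ground a new utterance ("is Rex better?") in an earlier conversation, which is the autobiographical record-keeping the abstract refers to.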
M.S.Cp.E.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering MSCpE
APA, Harvard, Vancouver, ISO, and other styles
15

Panesar, Kulvinder. "Conversational artificial intelligence - demystifying statistical vs linguistic NLP solutions." Universitat Politécnica de Valéncia, 2020. http://hdl.handle.net/10454/18121.

Full text
Abstract:
This paper aims to demystify the hype and attention surrounding chatbots and their association with conversational artificial intelligence. Both are slowly emerging as a real presence in our lives thanks to impressive technological developments in machine learning, deep learning and natural language understanding. Our question, however, is what is under the hood, and how far and to what extent chatbot and conversational artificial intelligence solutions can work. Natural language is the most easily understood knowledge representation for people, but certainly not the best for computers because of its inherently ambiguous, complex and dynamic nature. We critique the knowledge representation of heavily statistical chatbot solutions against linguistic alternatives. In order to react intelligently to the user, natural language solutions must critically consider other factors such as context, memory, intelligent understanding, previous experience, and personalized knowledge of the user. We delve into the spectrum of conversational interfaces and focus on a strong artificial intelligence concept. This is explored via a text-based conversational software agent with a deep strategic role: to hold a conversation and provide the mechanisms needed to plan, to decide what to do next, and to manage the dialogue to achieve a goal. To demonstrate this, a deep linguistically aware and knowledge-aware text-based conversational agent (LING-CSA) presents a proof of concept of a non-statistical conversational AI solution.
APA, Harvard, Vancouver, ISO, and other styles
16

ZUCCA, MARIO. "The economics of Conversational Agents." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1006351.

Full text
Abstract:
For many years, software has been developed to automate the internal processes of companies. Over the years we have learned to measure the cost/benefit ratio of introducing new automation tools. We have also figured out how to define an introduction strategy that minimizes risks and maximizes returns; this implies launching initiatives aimed at preparing the organization for the changes the new features will induce. Recently, however, we have observed constant growth in the demand for new automation models that can make the most of scientific advances in the field of cognitive services. In recent years we have witnessed the introduction of new process-automation services borrowed from experience in the field of robotics and adapted to more complex business models. With the advent of increasingly sophisticated cognitive services, we can easily imagine how combining RPA tools with cognitive services can significantly extend the services that companies offer. In fact, we can automate certain business processes by integrating cognitive services, adding a new dimension of intervention and allowing the system to interact using natural language. We can briefly define these new tools as conversational agents. There are already several concrete implementations of these technologies that are leaving the research field to enter the industrial world, and we are witnessing a progressive spread of conversational platforms in the consumer sphere as well. Some of the most concrete examples with significant commercial success are Google Home®, Alexa®, Siri®, Cortana®, etc. This success has been so significant that the concept of the virtual assistant has entered common use.
The main companies have begun introducing these technologies through simple chatbots which, nevertheless, are changing the primary way of interacting with customers through the progressive replacement of traditional call centers. However, many decision makers remain doubtful about the claims surrounding the promised business value of conversational platforms. Potential adopters must be informed about the true value and economic returns of the technological investments needed to introduce these new tools into organizations. In this work we define precisely what conversational agents are and provide a solid economic model for estimating the factors that influence the adoption of this technology and the economic returns on the investments.
APA, Harvard, Vancouver, ISO, and other styles
17

Sidås, Albin, and Simon Sandberg. "Conversational Engine for Transportation Systems." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176810.

Full text
Abstract:
Today's communication between operators and professional drivers takes place through direct conversations between the parties. This thesis project explores the possibility of supporting the operators in classifying the topic of incoming communications, and which entities are affected, through the use of named entity recognition and topic classification. By developing a synthetic training dataset, a NER model and a topic classification model were developed and evaluated, achieving F1-scores of 71.4 and 61.8 respectively. These results are explained by the low variance of the synthetic dataset compared to a transcribed real-world dataset, which included anomalies not represented in the synthetic data. The models were integrated into the dialogue framework Emora to seamlessly handle the back-and-forth communication and generate responses.
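As a reminder of how F1-scores like those reported above are computed, here is a minimal F1 calculation over sets of predicted entity tuples. The gold and predicted example data are invented, not taken from the thesis:

```python
def f1_score(gold, predicted):
    """gold, predicted: sets of (entity_type, span_text) tuples."""
    tp = len(gold & predicted)          # true positives
    fp = len(predicted - gold)          # false positives
    fn = len(gold - predicted)          # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical transport-domain example: one correct entity, one missed,
# one spurious -> precision 0.5, recall 0.5, F1 0.5.
gold = {("VEHICLE", "truck 12"), ("LOCATION", "terminal B")}
pred = {("VEHICLE", "truck 12"), ("LOCATION", "gate 4")}
score = f1_score(gold, pred)   # -> 0.5
```

F1 is the harmonic mean of precision and recall, which is why a model can only score well when it both finds the gold entities and avoids spurious ones.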
APA, Harvard, Vancouver, ISO, and other styles
18

Rothwell, Clayton D. "Recurrence Quantification Models of Human Conversational Grounding Processes: Informing Natural Language Human-Computer Interaction." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527591081613424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wärnestål, Pontus. "Dialogue Behavior Management in Conversational Recommender Systems." Doctoral thesis, Linköpings universitet, NLPLAB - Laboratoriet för databehandling av naturligt språk, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9624.

Full text
Abstract:
This thesis examines recommendation dialogue, in the context of dialogue strategy design for conversational recommender systems. The purpose of a recommender system is to produce personalized recommendations of potentially useful items from a large space of possible options. In a conversational recommender system, this task is approached by utilizing natural language recommendation dialogue for detecting user preferences, as well as for providing recommendations. The fundamental idea of a conversational recommender system is that it relies on dialogue sessions to detect, continuously update, and utilize the user's preferences in order to predict potential interest in domain items modeled in a system. Designing the dialogue strategy management is thus one of the most important tasks for such systems. Based on empirical studies as well as design and implementation of conversational recommender systems, a behavior-based dialogue model called bcorn is presented. bcorn is based on three constructs, which are presented in the thesis. It utilizes a user preference modeling framework (preflets) that supports and utilizes natural language dialogue, and allows for descriptive, comparative, and superlative preference statements, in various situations. Another component of bcorn is its message-passing formalism, pcql, which is a notation used when describing preferential and factual statements and requests. bcorn is designed to be a generic recommendation dialogue strategy with conventional, information-providing, and recommendation capabilities, that each describes a natural chunk of a recommender agent's dialogue strategy, modeled in dialogue behavior diagrams that are run in parallel to give rise to coherent, flexible, and effective dialogue in conversational recommender systems. Three empirical studies have been carried out in order to explore the problem space of recommendation dialogue, and to verify the solutions put forward in this work. 
Study I is a corpus study in the domain of movie recommendations. The result of the study is a characterization of recommendation dialogue, which forms the base for a first prototype implementation of a human-computer recommendation dialogue control strategy. Study II is an end-user evaluation of the acorn system, which implements the dialogue control strategy; it verifies the effectiveness and usability of the strategy and yields implications that inform the refinement of the model used in the bcorn dialogue strategy. Study III is an overhearer evaluation of a functional conversational recommender system called CoreSong, which implements the bcorn model. The result of the study is indicative of the soundness of the behavior-based approach to conversational recommender system design, as well as of the informativeness, naturalness, and coherence of the individual bcorn dialogue behaviors.
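The behavior-based idea of several dialogue behaviors running in parallel and the most applicable one producing the next system move can be sketched as follows. Behavior names, applicability scores and moves are illustrative assumptions, not bcorn's actual formalism or pcql notation:

```python
class Behavior:
    """Base class: each behavior scores its own applicability."""
    def applicability(self, state):
        return 0.0
    def act(self, state):
        raise NotImplementedError

class RecommendBehavior(Behavior):
    def applicability(self, state):
        # Only sensible once enough preferences have been collected.
        return 1.0 if len(state["preferences"]) >= 2 else 0.0
    def act(self, state):
        return "recommend item matching " + ", ".join(state["preferences"])

class ElicitPreferenceBehavior(Behavior):
    def applicability(self, state):
        return 0.5   # asking for more preferences is always possible
    def act(self, state):
        return "ask about user preferences"

def next_move(state, behaviors):
    """Run all behaviors 'in parallel' and let the best one act."""
    best = max(behaviors, key=lambda b: b.applicability(state))
    return best.act(state)

behaviors = [RecommendBehavior(), ElicitPreferenceBehavior()]
move1 = next_move({"preferences": []}, behaviors)
move2 = next_move({"preferences": ["comedy", "90s"]}, behaviors)
```

The design point this illustrates is the one the abstract makes: each behavior describes a natural chunk of the dialogue strategy, and their interplay, rather than a single monolithic script, produces coherent and flexible dialogue.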
APA, Harvard, Vancouver, ISO, and other styles
20

Sahay, Saurav. "Socio-semantic conversational information access." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42855.

Full text
Abstract:
The main contributions of this thesis revolve around the development of an integrated conversational recommendation system, combining data and information models with community networks and interactions to leverage multi-modal information access. We have developed a real-time conversational information access community agent that leverages community knowledge by pushing relevant recommendations to users of the community. The recommendations are delivered in the form of web resources, past conversations and people to connect to. The information agent (cobot, for community/collaborative bot) monitors the community conversations and is 'aware' of users' preferences by implicitly capturing their short-term and long-term knowledge models from conversations. The agent leverages health and medical domain knowledge to extract concepts, associations and relationships between concepts; formulates queries for semantic search; and provides socio-semantic recommendations in the conversation after applying various relevance filters to the candidate results. The agent also takes into account users' verbal intentions in conversations when making recommendation decisions. One of the goals of this thesis is to develop an innovative approach to delivering relevant information using a combination of social networking, information aggregation, semantic search and recommendation techniques. The idea is to facilitate timely and relevant social information access by mixing past community-specific conversational knowledge and web information access to recommend and connect users with relevant information. Language and interaction create usable memories, useful for making decisions about what actions to take and what information to retain. Cobot leverages these interactions to maintain users' episodic and long-term semantic models. The agent analyzes these memory structures to match and recommend users in conversations according to the contextual information need.
The social feedback on the recommendations is registered in the system so that the algorithms can promote community-preferred, contextually relevant resources. The nodes of the semantic memory are frequent concepts extracted from the user's interactions. The concepts are connected by associations that develop when concepts co-occur frequently. Over time, as the user participates in more interactions, new concepts are added to the semantic memory. Different conversational facets are matched against episodic memories, and a spreading activation search on the semantic net is performed to generate the top candidate user recommendations for the conversation. The unifying themes in this thesis revolve around the informational and social aspects of a unified information access architecture that integrates semantic extraction and indexing with user modeling and recommendations.
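The spreading activation search over a semantic net mentioned in this abstract can be sketched as follows. The concept graph, decay factor, and number of propagation steps are invented for illustration; the thesis's actual network is built from concepts mined from user interactions:

```python
def spread_activation(graph, seeds, decay=0.5, steps=2):
    """graph: {node: [neighbors]}; seeds: initially activated concepts.

    Activation starts at 1.0 on the seeds and propagates to neighbors,
    attenuated by `decay` at each hop; a node keeps its highest level.
    """
    activation = {n: 0.0 for n in graph}
    for s in seeds:
        activation[s] = 1.0
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            if level > 0:
                for nb in graph[node]:
                    new[nb] = max(new[nb], level * decay)
        activation = new
    return activation

# Toy health-domain concept graph (hypothetical).
graph = {
    "diabetes": ["insulin", "diet"],
    "insulin": ["diabetes"],
    "diet": ["diabetes", "exercise"],
    "exercise": ["diet"],
}
act = spread_activation(graph, seeds=["diabetes"])
```

Concepts one hop from the seed end up at 0.5 and two hops away at 0.25, so ranking nodes by activation surfaces the concepts (and, in the thesis, the users) most related to the current conversational context.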
APA, Harvard, Vancouver, ISO, and other styles
21

Panesar, Kulvinder. "Functional linguistic based motivations for a conversational software agent." Cambridge Scholars Publishing, 2019. http://hdl.handle.net/10454/18134.

Full text
Abstract:
This chapter discusses a linguistically orientated model of a conversational software agent (CSA) framework (Panesar 2017) sensitive to natural language processing (NLP) concepts and the levels of adequacy of a functional linguistic theory (LT). We discuss the relationship between NLP and knowledge representation (KR), and connect this with the goals of a linguistic theory (Van Valin and LaPolla 1997), in particular Role and Reference Grammar (RRG) (Van Valin Jr 2005). We debate the advantages of RRG and consider its fitness and computational adequacy. We present a design of a computational model of the linking algorithm that utilises a speech act construction as a grammatical object (Nolan 2014a, Nolan 2014b) and the sub-model of belief, desire and intentions (BDI) (Rao and Georgeff 1995). This model has been successfully implemented in software, using the Resource Description Framework (RDF), and we highlight some implementation issues that arose at the interface between language and knowledge representation (Panesar 2017).
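The serialisation of a speech-act construction to RDF mentioned above might look roughly like this toy triple generator. All predicate names (the `sa:` vocabulary) and the frame layout are hypothetical, not taken from the chapter:

```python
def to_triples(speech_act):
    """Flatten a speech-act frame into RDF-style (subject, predicate, object) triples."""
    subj = speech_act["id"]
    triples = [(subj, "rdf:type", "sa:SpeechAct"),
               (subj, "sa:illocutionaryForce", speech_act["force"]),
               (subj, "sa:speaker", speech_act["speaker"])]
    # Each semantic role of the construction becomes its own triple.
    for role, filler in speech_act["args"].items():
        triples.append((subj, "sa:" + role, filler))
    return triples

# Hypothetical frame for an utterance like "Can I get a coffee?"
act = {"id": "utt1", "force": "request", "speaker": "user",
       "args": {"theme": "coffee", "recipient": "agent"}}
triples = to_triples(act)
```

The point of a triple representation is that the agent's knowledge layer can then query the utterance with the same graph machinery it uses for its ontology, which is where the language/knowledge interface issues the chapter highlights tend to surface.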
The full-text of this article will be released for public view at the end of the publisher embargo on 27 Sep 2024.
APA, Harvard, Vancouver, ISO, and other styles
22

Panesar, Kulvinder. "Motivating a linguistically orientated model for a conversational software agent." FungramKB.com, 2018. http://hdl.handle.net/10454/18135.

Full text
Abstract:
This paper presents a critical evaluation framework for a linguistically orientated conversational software agent (CSA) (Panesar, 2017). The CSA prototype investigates the integration, intersection and interface of language, knowledge, and speech act constructions (SAC) based on a grammatical object (Nolan, 2014), the sub-model of belief, desires and intention (BDI) (Rao and Georgeff, 1995), and dialogue management (DM) for natural language processing (NLP). A long-standing issue within NLP CSA systems is refining the accuracy of interpretation to provide realistic dialogue that supports human-to-computer communication. The prototype comprises three phase models: (1) a linguistic model based on a functional linguistic theory, Role and Reference Grammar (RRG) (Van Valin Jr, 2005); (2) an agent cognitive model with two inner models: (a) a knowledge representation model employing conceptual graphs serialised to the Resource Description Framework (RDF), and (b) a planning model underpinned by BDI concepts (Wooldridge, 2013), intentionality (Searle, 1983) and rational interaction (Cohen and Levesque, 1990); and (3) a dialogue model employing common ground (Stalnaker, 2002). The evaluation approach for this Java-based prototype and its phase models is multi-pronged, driven by grammatical testing (English language utterances), software engineering and agent practice. A set of evaluation criteria is grouped per phase model, and the testing framework aims to test the interface, intersection and integration of all phase models and their inner models. This approach encompasses checking performance both at internal processing stages per model and in post-implementation assessments of the goals of RRG, together with RRG-specific tests. The empirical evaluations demonstrate that the CSA is a proof of concept, demonstrating RRG's fitness for describing and explaining language phenomena, language processing and knowledge, and its computational adequacy.
By contrast, the evaluations identify the complexity of the lower-level computational mappings from natural language, via the agent, to the ontology, with semantic gaps that are further addressed by a lexical bridging consideration (Panesar, 2017).
APA, Harvard, Vancouver, ISO, and other styles
23

Vaudable, Christophe. "Analyse et reconnaissance des émotions lors de conversations de centres d'appels." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00758650.

Full text
Abstract:
Automatic emotion recognition in speech is a relatively recent research topic within speech processing, having been studied for only about a decade. It now attracts considerable attention, not only in academia but also in industry, thanks to gains in system performance and reliability. Early work relied on data acted out by performers, and was therefore not spontaneous. Even today, most studies exploit pre-segmented sequences from a single speaker rather than spontaneous communication between several speakers. This methodology makes the resulting findings hard to generalise to naturally collected data. The work undertaken in this thesis is based on call-centre conversations, recorded in large quantities and involving at least two human speakers (a customer and a sales agent) in each dialogue. Our goal is to detect customer satisfaction through emotional expression. In a first part, we present the scores that can be obtained on our data with models relying solely on acoustic or lexical cues. We show that an approach taking only one of these cue types into account is not sufficient to obtain satisfactory results. To overcome this problem, we propose a study on the fusion of acoustic, lexical and syntactic-semantic cues. We show that this combination of cues yields gains over the acoustic models, even when we rely on an approach without manual pre-processing (automatic segmentation of the conversations, use of transcripts produced by a speech recognition system).
In a second part, we observe that even though the hybrid acoustic/linguistic models yield interesting gains, the amount of data used in our detection models becomes a problem when we test our methods on new and highly varied data (49 hours drawn from the conversation database). To remedy this, we propose a method for enriching our training corpus: we automatically select new data to be integrated into the training set. These additions allow us to double the size of our training set and to obtain gains over the initial models. Finally, in a last part, we choose to evaluate our methods no longer on portions of dialogues, as is the case in most studies, but on complete conversations. For this we use the models from the previous studies (models based on cue fusion and on the automatic enrichment methods) and add two further groups of cues: (i) "structural" cues taking into account information such as the duration of the conversation and the speaking time of each type of speaker; (ii) "dialogic" cues including information such as the topic of the conversation, as well as a new concept we call "affective involvement", which aims to model the impact of the current speaker's emotional production on the other participant(s) in the conversation. We show that when we combine all of this information, we obtain results close to human performance when determining whether a conversation is positive or negative
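As an illustration of the score-level fusion this abstract describes, the sketch below combines a lexical cue and an acoustic cue into a single per-turn satisfaction score. It is not the system from the thesis: the toy lexicon, the prosodic mapping and the fusion weights are invented assumptions.

```python
# Minimal sketch of late (score-level) fusion of lexical and acoustic cues.
# Lexicon, prosodic mapping and weights are illustrative, not from the thesis.

POSITIVE = {"merci", "parfait", "satisfait"}
NEGATIVE = {"probleme", "inacceptable", "annuler"}

def lexical_score(words):
    """Crude polarity in [-1, 1] from a toy sentiment lexicon."""
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def acoustic_score(mean_f0_hz, energy):
    """Placeholder mapping of prosodic cues to a polarity-like score."""
    # Assumption: pitch/energy stand in for arousal; a real model would be
    # trained on labelled call-centre audio.
    arousal = min(1.0, (mean_f0_hz / 300.0 + energy) / 2.0)
    return 2 * arousal - 1

def fused_score(words, mean_f0_hz, energy, w_lex=0.6, w_ac=0.4):
    """Weighted combination of the two cue types for one turn."""
    return w_lex * lexical_score(words) + w_ac * acoustic_score(mean_f0_hz, energy)

turn = ["merci", "parfait"]
print(round(fused_score(turn, mean_f0_hz=210.0, energy=0.5), 3))
```

A trained fusion model would learn the weights from labelled data rather than fixing them by hand.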
APA, Harvard, Vancouver, ISO, and other styles
24

Bouguelia, Sara. "Modèles de dialogue et reconnaissance d'intentions composites dans les conversations Utilisateur-Chatbot orientées tâches." Electronic Thesis or Diss., Lyon 1, 2023. http://www.theses.fr/2023LYO10106.

Full text
Abstract:
Les Systèmes de Dialogue (ou simplement chatbots) sont très demandés de nos jours. Ils permettent de comprendre les besoins des utilisateurs (ou intentions des utilisateurs), exprimés en langage naturel, et de répondre à ces intentions en invoquant les APIs (Interfaces de Programmation d’Application) appropriées. Les chatbots sont connus pour leur interface facile à utiliser et ils ne nécessitent que l'une des capacités les plus innées des humains qui est l'utilisation du langage naturel. L'amélioration continue de l'Intelligence Artificielle (IA), du Traitement du Langage Naturel (NLP) et du nombre incalculable de dispositifs permet d'effectuer des tâches réelles (par exemple, faire une réservation) en utilisant des interactions basées sur le langage naturel entre les utilisateurs et un grand nombre de services. Néanmoins, le développement de chatbots est encore à un stade préliminaire, avec plusieurs défis théoriques et techniques non résolus découlant de (i) la variation d'énoncés dans les interactions humain-chatbot en libre échange et (ii) du grand nombre de services logiciels potentiellement inconnus au moment du développement. Les conversations en langage naturel des personnes peuvent être riches, potentiellement ambiguës et exprimer des intentions complexes et dépendantes du contexte. Les techniques traditionnelles de modélisation et d'orchestration de processus et de composition de services sont limitées pour soutenir de telles conversations car elles supposent généralement une attente a priori de quelles informations et applications seront accédées et comment les utilisateurs exploreront ces sources et services. Limiter les conversations à un modèle de processus signifie que nous ne pouvons soutenir qu'une petite fraction de conversations possibles.
Bien que les avancées existantes dans les techniques de NLP et d'apprentissage automatique (ML) automatisent diverses tâches telles que la reconnaissance d'intention, la synthèse d'appels API pour prendre en charge une large gamme d'intentions d'utilisateurs potentiellement complexes est encore largement un processus manuel et coûteux.Ce projet de thèse vise à faire avancer la compréhension fondamentale de l'ingénierie des services cognitifs. Dans cette thèse, nous contribuons à des abstractions et des techniques novatrices axées sur la synthèse d'appels API pour soutenir une large gamme d'intentions d'utilisateurs potentiellement complexes. Nous proposons des techniques réutilisables et extensibles pour reconnaître et réaliser des intentions complexes lors des interactions entre humains, chatbots et services. Ces abstractions et techniques visent à débloquer l'intégration transparente et évolutive de conversations basées sur le langage naturel avec des services activés par logiciel
Dialogue Systems (or simply chatbots) are in very high demand these days. They enable the understanding of user needs (or user intents), expressed in natural language, and the fulfilment of such intents by invoking the appropriate back-end APIs (Application Programming Interfaces). Chatbots are famed for their easy-to-use interface and gentle learning curve (they only require one of humans' most innate abilities, the use of natural language). The continuous improvement in Artificial Intelligence (AI), Natural Language Processing (NLP), and the countless number of devices allows performing real-world tasks (e.g., making a reservation) through natural language-based interactions between users and a large number of software-enabled services. Nonetheless, chatbot development is still at a preliminary stage, and several theoretical and technical challenges need to be addressed. One challenge stems from the wide range of utterance variations in open-ended human-chatbot interactions. Additionally, there is a vast space of software services that may be unknown at development time. Natural human conversations can be rich, potentially ambiguous, and express complex, context-dependent intents. Traditional business process and service composition modeling and orchestration techniques are limited in supporting such conversations because they usually assume an a priori expectation of what information and applications will be accessed and how users will explore these sources and services. Limiting conversations to a process model means that we can only support a small fraction of possible conversations. While existing advances in NLP and Machine Learning (ML) techniques automate various tasks such as intent recognition, the synthesis of API calls to support a broad range of potentially complex user intents is still largely a manual, ad-hoc and costly process. This thesis project aims at advancing the fundamental understanding of cognitive services engineering.
In this thesis we contribute novel abstractions and techniques focusing on the synthesis of API calls to support a broad range of potentially complex user intents. We propose reusable and extensible techniques to recognize and realize complex intents during human-chatbot-service interactions. These abstractions and techniques seek to unlock the seamless and scalable integration of natural language-based conversations with software-enabled services
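A minimal sketch of the intent-to-API-call synthesis discussed above, assuming an intent has already been recognized and its slots extracted; the intent name, slot names and registry entries below are hypothetical, not taken from the thesis:

```python
# Toy intent-to-API synthesis: map a recognized intent plus slots to a call
# descriptor, or fall back to a clarification request when slots are missing.
# The registry and names are invented for illustration.

API_REGISTRY = {
    "book_restaurant": {
        "endpoint": "POST /reservations",
        "required": ["restaurant", "time", "party_size"],
    },
}

def synthesize_call(intent, slots):
    spec = API_REGISTRY.get(intent)
    if spec is None:
        raise ValueError(f"no API known for intent {intent!r}")
    missing = [s for s in spec["required"] if s not in slots]
    if missing:
        # A task-oriented chatbot would ask a follow-up question here.
        return {"action": "ask_user", "missing_slots": missing}
    return {"action": "call", "endpoint": spec["endpoint"], "args": dict(slots)}

print(synthesize_call("book_restaurant", {"restaurant": "Chez Paul", "time": "19:30"}))
```

The point of the sketch is the control flow: synthesis either produces a concrete call or drives a mixed-initiative clarification turn.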
APA, Harvard, Vancouver, ISO, and other styles
25

Venter, Wessel Johannes. "An embodied conversational agent with autistic behaviour." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20115.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: In this thesis we describe the creation of an embodied conversational agent which exhibits the behavioural traits of a child who has Asperger Syndrome. The agent is rule-based, rather than artificially intelligent, for which we give justification. We then describe the design and implementation of the agent, and pay particular attention to the interaction between emotion, personality and social context. A 3D demonstration program shows the typical output to conform to Asperger-like answers, with corresponding emotional responses.
AFRIKAANSE OPSOMMING: In hierdie tesis beskryf ons die ontwerp en implementasie van 'n gestaltegespreksagent wat die gedrag van 'n kind met Asperger se sindroom uitbeeld. Ons regverdig die besluit dat die agent reël-gebaseerd is, eerder as 'n ware skynintelligensie implementasie. Volgende beskryf ons die wisselwerking tussen emosies, persoonlikheid en sosiale konteks en hoe dit inskakel by die ontwerp en implementasie van die agent. 'n 3D demonstrasieprogram toon tipiese ooreenstemmende Asperger-agtige antwoorde op vrae, met gepaardgaande emosionele reaksies.
APA, Harvard, Vancouver, ISO, and other styles
26

Ku, Jeong Yoon. "Korean honorifics: a case study analysis of Korean Speech levels in naturally occurring conversations." Thesis, Canberra, ACT : The Australian National University, 2014. http://hdl.handle.net/1885/12376.

Full text
Abstract:
The Korean honorific system, one of the significant grammatical systems in Korean, indicates the hierarchical social status of participants and plays an essential role in social interaction. For example, the speech levels are sentence-final suffixes attached to verbs and adjectives, grammatically organized according to the speakers' relationships. Speakers must choose among these verb endings and/or vocabulary items in every interaction, so the proper use of speech levels is a key factor in expressing social identities, interpersonal feelings, and relationships. However, interpersonal feelings and relationships are hard to infer from the actual use of speech levels alone. Two aspects of the interpersonal relationship between participants in a conversation affect the use of honorifics: vertical distance (gender, age) and horizontal distance (the degree of intimacy), and together they make the use of speech levels complex. Because of this complexity, many learners find Korean speech levels difficult to acquire. Several researchers have examined Korean language textbooks and language teaching in terms of Korean honorifics. They have pointed out several problems in current teaching materials and emphasized the importance of pragmatic factors and the necessity of authentic data to fully reflect actual honorific use. Addressing these issues, the thesis demonstrates the need for teaching materials that introduce how honorific speech levels are used in naturally occurring conversation, by showing the complexity of how one speaker can use and switch among speech levels depending on the interlocutors or situations in the conversational interaction.
APA, Harvard, Vancouver, ISO, and other styles
27

FORLEO, GIANROBERTO. "Digital AgriFood – Conversazioni online e Big Data per lo sviluppo della comunicazione strategica e progettuale del sistema produttivo marchigiano. Abstract." Doctoral thesis, Urbino, 2023. https://hdl.handle.net/11576/2710331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Panesar, Kulvinder. "An Evaluation of a Linguistically Motivated Conversational Software Agent Framework." UNIVERSITAT POLITÈCNICA DE VALÈNCIA, 2019. http://hdl.handle.net/10454/18122.

Full text
Abstract:
This paper presents a critical evaluation framework for a linguistically motivated conversational software agent (CSA). The CSA prototype investigates the integration, intersection and interface of the language, knowledge, and speech act constructions (SAC) based on a grammatical object, and the sub-model of beliefs, desires and intentions (BDI) and dialogue management (DM) for natural language processing (NLP). A long-standing issue within NLP CSA systems is refining the accuracy of interpretation to provide realistic dialogue to support human-to-computer communication. The prototype comprises three phase models: (1) a linguistic model based on a functional linguistic theory, Role and Reference Grammar (RRG); (2) an Agent Cognitive Model with two inner models: (a) a knowledge representation model, and (b) a planning model underpinned by BDI concepts, intentionality and rational interaction; and (3) a dialogue model. The evaluation strategy for this Java-based prototype is multi-approach, driven by grammatical testing (English language utterances), software engineering and agent practice. A set of evaluation criteria is grouped per phase model, and the testing framework aims to test the interface, intersection and integration of all phase models. The empirical evaluations demonstrate that the CSA is a proof of concept, demonstrating RRG's fitness for describing and explaining phenomena, language processing and knowledge, and its computational adequacy. Contrastingly, the evaluations identify the complexity of lower-level computational mappings from natural language and agent to ontology, with semantic gaps further addressed by a lexical bridging solution.
APA, Harvard, Vancouver, ISO, and other styles
29

Erbacher, Pierre. "Proactive models for open-domain conversational search." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS009.

Full text
Abstract:
Les systèmes conversationnels deviennent de plus en plus des passerelles importantes vers l'information dans un large éventail de domaines d'application tels que le service client, la santé, l'éducation, le travail de bureau, les achats en ligne et la recherche sur le Web. Même si les modèles linguistiques existants sont capables de suivre de longues conversations, de répondre à des questions et de résumer des documents avec une fluidité impressionnante, ils ne peuvent pas être considérés comme de véritables systèmes de recherche conversationnelle. Au-delà de fournir des réponses en langage naturel, une capacité clé des systèmes de recherche conversationnelle est leur participation (pro)active à la conversation avec les utilisateurs. Cela permet aux systèmes de recherche conversationnelle de mieux saisir les besoins des utilisateurs, mais également de les guider et de les assister lors des sessions de recherche. En particulier, lorsque les utilisateurs ne peuvent pas parcourir la liste des documents pour en évaluer la pertinence, comme dans les interactions purement vocales, le système doit prendre l'initiative de demander un contexte supplémentaire, de demander une confirmation ou de suggérer plus d'informations pour aider l'utilisateur à naviguer virtuellement et réduire sa charge cognitive. Cependant, en raison du coût élevé de la collecte et de l'annotation de ces données, les ensembles de données conversationnelles disponibles pour l'accès à l'information sont généralement petits, fabriqués à la main et limités à des applications spécifiques à un domaine telles que les recommandations ou la réponse aux questions conversationnelles, qui sont généralement initiées par l'utilisateur et contiennent des questions simples ou une série de questions contextualisées. De plus, il est particulièrement difficile d'évaluer correctement les systèmes de recherche conversationnelle en raison de la nature des interactions.
Dans cette thèse, nous visons à améliorer la recherche conversationnelle en permettant des interactions plus complexes et plus utiles avec les utilisateurs. Nous proposons plusieurs méthodes et approches pour atteindre cet objectif. Premièrement, dans les chapitres 1 et 2, nous étudions comment les simulations d'utilisateurs peuvent être utilisées pour former et évaluer des systèmes qui raffinent les requêtes via des interactions séquentielles avec l'utilisateur. Nous nous concentrons sur l'interaction séquentielle basée sur les clics avec une simulation utilisateur pour clarifier les requêtes. Ensuite, dans les chapitres 3 et 4, nous explorons comment les ensembles de données IR existants peuvent être améliorés avec des interactions simulées pour améliorer les capacités IR dans la recherche conversationnelle et comment les interactions à initiatives mixtes peuvent servir à la récupération de documents et à la désambiguïsation des requêtes. Dans le chapitre 4, nous proposons d'augmenter l'ensemble de données AmbigNQ avec des questions de clarification pour mieux former et évaluer les systèmes afin d'effectuer des tâches de réponse proactive aux questions, où les systèmes sont censés lever l'ambiguïté des questions initiales des utilisateurs avant de répondre. Enfin, dans le dernier chapitre, nous nous sommes concentrés sur l'interaction entre les systèmes et un moteur de recherche externe. Nous avons introduit une nouvelle approche pour apprendre à un modèle de langage à évaluer en interne sa capacité à répondre correctement à une requête donnée, sans utiliser autre chose que les données comprises dans son apprentissage
Conversational systems are increasingly becoming important gateways to information in a wide range of application domains such as customer service, health, education, office work, online shopping, and web search. While existing language models are able to follow long conversations, answer questions, and summarize documents with impressive fluency, they cannot be considered true conversational search systems. Beyond providing natural language answers, a key capability of conversational search systems is their (pro)active participation in the conversation with users. This allows conversational search systems to better capture users' needs, but also to guide and assist them during search sessions. In particular, when users cannot browse the list of documents to assess relevance, as in pure speech interactions, the system needs to take the initiative to ask for additional context, ask for confirmation, or suggest more information to help the user navigate virtually and reduce their cognitive load. Additionally, these models are expected not only to take the initiative in conversations with users but also to proactively interact with a diverse range of other systems and databases, including various tools (calendar, calculator), the internet (search engines), and other APIs (weather, maps, e-commerce, booking...). However, due to the high cost of collecting and annotating such data, available conversational datasets for information access are typically small, hand-crafted, and limited to domain-specific applications such as recommendation or conversational question-answering, which are typically user-initiated and contain simple questions or a series of contextualized questions. In addition, it is particularly challenging to properly evaluate conversational search systems because of the nature of the interactions. In this thesis, we aim to improve conversational search by enabling more complex and useful interactions with users.
We propose multiple methods and approaches to achieve this goal. First, in chapters 1 and 2, we investigate how user simulations can be used to train and evaluate systems that perform query refinement through sequential interactions with the user. We focus on sequential click-based interaction with a user simulation for clarifying queries. Then, in chapters 3 and 4, we explore how existing IR datasets can be enhanced with simulated interactions to improve IR capabilities in conversational search, and how mixed-initiative interactions can serve document retrieval and query disambiguation. In chapter 4, we propose to augment the AmbigNQ dataset with clarifying questions to better train and evaluate systems to perform proactive question-answering tasks, where systems are expected to disambiguate the initial user questions before answering. To our knowledge, PAQA is the first dataset providing questions, answers, supporting documents, and clarifying questions covering multiple types of ambiguity (entity references, event references, properties, time-dependent…) with enough examples for fine-tuning models. Finally, in the last chapter, we focus on the interaction between systems and an external search engine. We introduce a new method to teach a language model to internally assess its ability to answer a given query correctly, using nothing beyond the data already included in its training. The resulting model can directly identify its ability to answer a given question, with performance comparable, if not superior, to widely accepted hallucination-detection baselines such as perplexity-based approaches, which are strong exogenous baselines. This allows the model to proactively query a search API depending on its ability to answer the question
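The final chapter's idea, deciding whether to call an external search API from the model's own uncertainty, can be sketched with a simple perplexity threshold. The token probabilities and threshold below are invented for illustration; the thesis's learned self-assessment is precisely meant to improve on this kind of exogenous baseline.

```python
import math

# Sketch of a "should I search?" decision: if the model's uncertainty on its
# own answer tokens is high, proactively call the search API instead.

def perplexity(token_probs):
    """Perplexity over a sequence of per-token probabilities."""
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

def should_query_search(token_probs, threshold=5.0):
    """True when the model looks too uncertain to answer on its own."""
    return perplexity(token_probs) > threshold

confident = [0.9, 0.8, 0.85]   # model seems to "know" the answer
uncertain = [0.2, 0.1, 0.3]    # model should fall back to search

print(should_query_search(confident), should_query_search(uncertain))
```

In a real system the probabilities would come from the language model's output distribution, and the threshold would be tuned on held-out questions.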
APA, Harvard, Vancouver, ISO, and other styles
30

Leenhardt, Marguerite. "Les conversations des internautes. Approche pragmatique d'acquisition de connaissances à partir de conversations textuelles pour la recherche marketing." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCA034.

Full text
Abstract:
Ce travail de recherche s'inscrit dans le cadre des méthodes de la linguistique de corpus et procède des besoins d'exploitation formulés dans le domaine du marketing à l'égard des conversations des internautes. Deux pistes sont poursuivies, la première relevant de leur description du point de vue de l'analyse des conversations et de la textométrie, la seconde visant des applications pratiques relatives à la fouille de textes. Une méthode de description systématique et automatisable est proposée, à partir de laquelle un procédé de mesure de l'engagement conversationnel des participants est mis en œuvre. L'étude des diagrammes d'engagement conversationnel (DEC) produits à partir de cette mesure permet d'observer des régularités typologiques dans les postures manifestées par les participants. Ce travail met également en exergue l'apport de la méthode textométrique pour l'acquisition de connaissances utiles à des fins de catégorisation automatique. Plusieurs analyses textométriques sont utilisées (spécificités, segments répétés, inventaires distributionnels) pour élaborer un modèle de connaissance dédié à la détection des intentions d'achat dans des fils de discussion issus d'un forum automobile. Les résultats obtenus, encourageants malgré la rareté des signaux exploitables au sein du corpus étudié, soulignent l'intérêt d'articuler des techniques d'analyse textométrique et de fouille de données textuelles au sein d'un même procédé d'acquisition de connaissances pour l'analyse automatique des conversations des internautes
This research draws on the methods of corpus linguistics and proceeds from needs expressed in the field of marketing regarding the conversations of internet users. Two lines of research are investigated: the first falls under the perspective of conversation analysis and textometry, the second focuses on practical applications for text mining. A systematic, automatable description method is proposed, from which a measure of participants' conversational engagement is implemented. The study of the conversational engagement diagrams (CED) produced from this measure makes it possible to observe typological regularities in how participants position themselves in conversations. This work also highlights the contribution of the textometric method for acquiring knowledge useful for supervised classification. Several textometric measures are used (specificities, repeated segments, distributional inventories) to develop a knowledge model for the detection of purchase intentions in discussion threads from an automotive forum. The results, encouraging despite the scarcity of usable signals in the corpus, underline the value of articulating textometric analysis techniques and text mining within a single knowledge acquisition process for the automatic analysis of the conversations of internet users
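One of the textometric measures mentioned, repeated segments, can be sketched as word n-grams that recur across a thread. The toy thread and parameters below are illustrative, not drawn from the automotive-forum corpus studied in the thesis.

```python
from collections import Counter

# Toy "repeated segments" extraction: count word bigrams across the posts of
# a thread and keep those that occur at least twice.

def repeated_segments(posts, n=2, min_count=2):
    counts = Counter()
    for post in posts:
        words = post.lower().split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {seg: c for seg, c in counts.items() if c >= min_count}

thread = [
    "je veux acheter une voiture",
    "il veut acheter une voiture rapidement",
]
print(repeated_segments(thread))
```

Segments such as "acheter une voiture" recurring across posts are exactly the kind of distributional cue a purchase-intention model could exploit.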
APA, Harvard, Vancouver, ISO, and other styles
31

Cervone, Alessandra. "Computational models of coherence for open-domain dialogue." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/276165.

Full text
Abstract:
Coherence is the quality that gives a text its conceptual unity, making a text a coordinated set of connected parts rather than a random group of sentences (turns, in the case of dialogue). Hence, coherence is an integral property of human communication, necessary for a meaningful discourse both in text and dialogue. As such, coherence can be regarded as a requirement for conversational agents, i.e. machines designed to converse with humans. Though recently there has been a proliferation in the usage and popularity of conversational agents, dialogue coherence is still a relatively neglected area of research, and coherence across multiple turns of a dialogue remains an open challenge for current conversational AI research. As conversational agents progress from being able to handle a single application domain to multiple ones through any domain (open-domain), the range of possible dialogue paths increases, and thus the problem of maintaining multi-turn coherence becomes especially critical. In this thesis, we investigate two aspects of coherence in dialogue and how they can be used to design modules for an open-domain coherent conversational agent. In particular, our approach focuses on modeling intentional and thematic information patterns of distribution as proxies for a coherent discourse in open-domain dialogue. While for modeling intentional information we employ Dialogue Acts (DA) theory (Bunt, 2009); for modeling thematic information we rely on open-domain entities (Barzilay and Lapata, 2008). We find that DAs and entities play a fundamental role in modelling dialogue coherence both independently and jointly, and that they can be used to model different components of an open-domain conversational agent architecture, such as Spoken Language Understanding, Dialogue Management, Natural Language Generation, and open-domain dialogue evaluation. 
The main contributions of this thesis are: (I) we present an open-domain modular conversational agent architecture based on entity and DA structures designed for coherence and engagement; (II) we propose a methodology for training an open-domain DA tagger compliant with the ISO 24617-2 standard (Bunt et al., 2012) combining multiple resources; (III) we propose different models, and a corpus, for predicting open-domain dialogue coherence using DA and entity information trained with weakly supervised techniques, first at the conversation level and then at the turn level; (IV) we present supervised approaches for automatic evaluation of open-domain conversation exploiting DA and entity information, both at the conversation level and at the turn level; (V) we present experiments with Natural Language Generation models that generate text from Meaning Representation structures composed of DAs and slots for an open-domain setting.
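As a rough illustration of the entity side of this modelling, the sketch below scores a dialogue by entity overlap between adjacent turns. The naive capitalisation-based entity extractor and the toy dialogue are assumptions, far simpler than the trained entity-grid-style models the thesis builds on.

```python
# Toy entity-based coherence proxy: the fraction of adjacent turn pairs that
# share at least one entity. Entity extraction is a crude stand-in.

def entities(turn):
    # Stand-in for a real entity linker: keep capitalised tokens.
    return {w.strip(".,") for w in turn.split() if w[0].isupper()}

def coherence_score(turns):
    pairs = list(zip(turns, turns[1:]))
    shared = sum(bool(entities(a) & entities(b)) for a, b in pairs)
    return shared / len(pairs)

dialogue = [
    "Have you seen Dunkirk ?",
    "Yes, Dunkirk is by Nolan .",
    "Nolan also directed Inception .",
    "I prefer pizza honestly .",
]
print(coherence_score(dialogue))
```

The abrupt topic change in the last turn is what lowers the score; a full model would combine such entity patterns with dialogue act transitions.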
APA, Harvard, Vancouver, ISO, and other styles
33

Salim, Soufian Antoine. "Analyse discursive multi-modale des conversations écrites en ligne portées sur la résolution de problèmes." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4074/document.

Full text
Abstract:
Nous nous intéressons aux conversations écrites en ligne orientées vers la résolution de problèmes. Dans la littérature, les interactions entre humains sont typiquement modélisées en termes d’actes de dialogue, qui désignent les types de fonctions remplies par les énoncés dans un discours. Nous cherchons à utiliser ces actes pour analyser les conversations écrites en ligne. Un cadre et des méthodes bien définies permettant une analyse fine de ce type de conversations en termes d’actes de dialogue représenteraient un socle solide sur lequel pourraient reposer différents systèmes liés à l’aide à la résolution des problèmes et à l’analyse des conversations écrites en ligne. De tels systèmes représentent non seulement un enjeu important pour l’industrie, mais permettraient également d’améliorer les plate-formes d’échanges collaboratives qui sont quotidiennement sollicitées par des millions d’utilisateurs. Cependant, les techniques d’identification de la structure des conversations n’ont pas été développées autour des conversations écrites en ligne. Il est nécessaire d’adapter les ressources existantes pour ces conversations. Cet obstacle est à placer dans le cadre de la recherche en communication médiée par les réseaux (CMR), et nous confronte à ses problématiques propres. Notre objectif est de modéliser les conversations écrites en ligne orientées vers la résolution de problèmes en termes d’actes de dialogue, et de proposer des outils pour la reconnaissance automatique de ces actes
We are interested in problem-solving online written conversations. These conversations may be found on online channels such as forums, mailing lists or chat rooms. In the literature, human interactions are usually modelled in terms of dialogue acts, which typically represent the discursive functions of utterances in dialogue. We want to use dialogue acts for the analysis of online written conversations. Well-defined methods and models allowing for the fine-grained analysis of these conversations would provide a solid framework to support various user-assistance and dialogue analysis systems. This is an important stake for the customer support industry, and could also improve collaborative assistance platforms that are accessed daily by millions of users. However, current conversation analysis techniques were not developed with written online conversations in mind, and it is necessary to adapt existing resources to these conversations. This effort is related to the field of research in computer-mediated communication (CMC). Our goal is to build a dialogue act model for problem-solving online written conversations, and to offer tools for the automatic recognition of these acts
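To make the dialogue-act view concrete, here is a toy rule-based tagger over a problem-solving thread. The act inventory and cue words are invented simplifications of the fine-grained taxonomies such work relies on; the thesis targets learned recognition over CMC corpora, not hand-written rules.

```python
# Toy dialogue-act tagger for a problem-solving thread. First matching rule
# wins; everything else falls back to "statement".

RULES = [
    ("question", lambda u: u.rstrip().endswith("?")),
    ("thanks", lambda u: "thank" in u.lower()),
    ("solution", lambda u: any(k in u.lower() for k in ("try", "you should", "run"))),
]

def tag(utterance):
    for act, match in RULES:
        if match(utterance):
            return act
    return "statement"

thread = [
    "My install fails with error 13, any idea?",
    "Try reinstalling the driver first.",
    "Thanks, that fixed it!",
]
print([tag(u) for u in thread])
```

Even this crude labelling exposes the question/solution/thanks structure typical of support threads, which is the discourse structure the thesis seeks to recognize automatically.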
APA, Harvard, Vancouver, ISO, and other styles
34

Lilja, Adam, and Max Kihlborg. "Important criteria when choosing a conversational AI platform for enterprises." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280896.

Full text
Abstract:
This paper evaluates and analyzes three conversational AI platforms, Dialogflow (Google), Watson Assistant (IBM) and Teneo (Artificial Solutions), on how they perform against a set of criteria: pricing model, ease of use, efficiency, the experience of working in the software, and the results to expect from each platform. The main focus was to investigate the platforms in order to understand which would be best suited to enterprises. The platforms were compared by performing a variety of tasks aimed at answering these questions. The technical research was combined with an analysis of each company's pricing model and strategy to understand how they target their products on the market. This study concludes that different software may be suitable for different settings depending on the size of an enterprise and its demand for complex solutions. Overall, Teneo outperformed its competitors in these tests and seems to be the most scalable solution, with the ability to create both simple and complicated solutions. It was more demanding to get started with than the other platforms, but became more efficient as time progressed. Among the findings, Dialogflow and Watson Assistant lacked capability when faced with complex and complicated tasks. From a pricing-strategy point of view, the companies are similar in their approach, but Artificial Solutions and IBM have more flexible methods while Google has a fixed pricing strategy. Combining the pricing-strategy and technical analyses suggests that Teneo would be a better choice for larger enterprises, while Watson Assistant and Dialogflow may be more suitable for smaller ones.
This work evaluates and analyses three conversational AI platforms, Dialogflow (Google), Watson Assistant (IBM) and Teneo (Artificial Solutions), based on how they perform against a number of criteria: pricing model, ease of use, efficiency, the experience of working in the software, and the results to be expected from each platform. The main focus was to examine the platforms in order to understand which would be best suited to enterprises. The platforms were compared by performing a variety of tasks aimed at answering these questions. The technical research was combined with an analysis of each company's pricing model and pricing strategy to understand how they position their products on the market. The study concludes that different software may be suitable for different settings depending on the size of an enterprise and its demand for complex solutions. Overall, Teneo outperformed its competitors in these tests and appears to be the most scalable solution, capable of producing both simple and complicated solutions. Getting started was more demanding than with the other platforms, but it became more efficient over time. Among the findings, Dialogflow and Watson Assistant lacked capability when faced with complex and complicated tasks. From a pricing-strategy perspective, the companies are similar in their approach, but Artificial Solutions and IBM have more flexible methods while Google has a fixed pricing strategy. Combining the pricing strategy with the technical analysis suggests that Teneo would be a better choice for larger enterprises, while Watson Assistant and Dialogflow may be more suitable for smaller ones.
APA, Harvard, Vancouver, ISO, and other styles
35

Parcollet, Titouan. "Quaternion neural networks A survey of quaternion neural networks - Chapter 2 Real to H-space Autoencoders for Theme Identification in Telephone Conversations - Chapter 7." Thesis, Avignon, 2019. http://www.theses.fr/2019AVIG0233.

Full text
Abstract:
In recent years, deep learning has become the preferred approach for building modern artificial intelligence (AI). The large increase in computing power, together with the ever-growing amount of available data, has made deep neural networks the best-performing solution for solving complex problems. However, the ability to properly represent the multidimensionality of real data remains a major challenge for artificial neural architectures. To address this problem, neural networks based on the algebras of complex and hypercomplex numbers have been developed. In particular, quaternion neural networks (QNNs) have been proposed to process three- and four-dimensional data, building on quaternions as representations of rotations in our three-dimensional space. Unfortunately, and unlike complex-valued neural networks, which are nowadays accepted as an alternative to real-valued networks, QNNs suffer from numerous shortcomings that the work detailed in this manuscript partly addresses. The thesis therefore consists of three parts that progressively introduce the missing concepts needed to make QNNs an alternative to real-valued neural networks. The first part presents and surveys previous findings on quaternions and quaternion neural networks, in order to lay a foundation for building modern QNNs. The second part introduces state-of-the-art quaternion neural networks, allowing comparisons with traditional modern architectures in identical settings. More precisely, QNNs had mostly been limited by overly simple architectures, often composed of a single hidden layer with few neurons.
First, fundamental paradigms such as autoencoders and deep neural networks are presented. Then, the widely used and well-studied convolutional and recurrent neural networks are extended to the quaternion space. Numerous experiments on various real-world applications, such as computer vision, spoken language understanding and automatic speech recognition, are conducted to compare the proposed quaternion models with conventional neural networks. In these specific settings, QNNs achieved better performance together with a substantial reduction in the number of neural parameters needed during training. QNNs are then extended to training conditions that allow quaternion models to handle any input representation. In a traditional QNN scenario, input features are manually segmented into four components to match the representation induced by quaternions. Unfortunately, it is difficult to ensure that such a segmentation is optimal for the problem at hand. Moreover, manual segmentation fundamentally restricts QNNs to tasks naturally defined in a space of at most four dimensions. The third part of this thesis therefore introduces a supervised model and an unsupervised model for extracting disentangled and meaningful quaternion input features from any one-dimensional real-valued signal, allowing QNNs to be used regardless of the dimensionality of the input vectors and of the task. Experiments on speech recognition and spoken-document classification show that the proposed approaches outperform traditional quaternion representations.
In recent years, deep learning has become the leading approach to modern artificial intelligence (AI). The significant improvement in the processing time required to train AI-based models, together with the growing amount of available data, has made deep neural networks (DNNs) the strongest solution for complex real-world problems. However, a major challenge for artificial neural architectures lies in better accounting for the high dimensionality of the data. To alleviate this issue, neural networks (NNs) based on complex and hypercomplex algebras have been developed. The natural multidimensionality of the data is elegantly embedded within the complex and hypercomplex neurons composing the model. In particular, quaternion neural networks (QNNs) have been proposed to deal with up to four-dimensional features, based on the quaternion representation of rotations and orientations. Unfortunately, and conversely to complex-valued neural networks, which are nowadays known as a strong alternative to real-valued neural networks, QNNs suffer from numerous limitations that are carefully addressed in the different parts detailed in this thesis. The thesis consists of three parts that gradually introduce the missing concepts of QNNs, to make them a strong alternative to real-valued NNs. The first part introduces and lists previous findings on quaternion numbers and quaternion neural networks, to define the context and a strong basis for building elaborate QNNs. The second part introduces state-of-the-art quaternion neural networks for a fair comparison with real-valued neural architectures. More precisely, QNNs were limited by their simple architectures, mostly composed of a single, shallow hidden layer. In this part, we propose to bridge the gap between quaternion and real-valued models by presenting different quaternion architectures. First, basic paradigms such as autoencoders and deep fully-connected neural networks are introduced.
Then, more elaborate convolutional and recurrent neural networks are extended to the quaternion domain. Experiments comparing QNNs with equivalent NNs have been conducted on real-world tasks across various domains, including computer vision, spoken language understanding and speech recognition. QNNs increase performance while reducing the number of neural parameters needed, compared to real-valued neural networks. Then, QNNs are extended to unconventional settings. In a conventional QNN scenario, input features are manually segmented into three or four components, enabling further quaternion processing. Unfortunately, there is no evidence that such a manual segmentation is the representation best suited to the considered task. Moreover, manual segmentation drastically reduces the field of application of QNNs to four-dimensional use cases. Therefore, the third part introduces a supervised and an unsupervised model to extract meaningful and disentangled quaternion input features from any real-valued input signal, enabling the use of QNNs regardless of the dimensionality of the considered task. Experiments on speech recognition and document classification show that the proposed approaches outperform traditional quaternion features.
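The algebra underlying QNNs can be illustrated with the Hamilton product, which is how a quaternion-valued weight acts on a quaternion-valued input. This is a textbook-level sketch for orientation, not code from the thesis:

```python
def hamilton_product(p, q):
    """Multiply two quaternions a + bi + cj + dk, each given as a 4-tuple."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

# A single quaternion weight maps 4 input dimensions to 4 output dimensions
# with 4 parameters; an unconstrained real-valued dense layer would need 16.
# This weight sharing is the source of the parameter reduction the abstract
# reports for QNNs.
```

Because the product is non-commutative (ij = k but ji = -k), quaternion layers encode structured relations between the four input components rather than treating them as independent channels.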
APA, Harvard, Vancouver, ISO, and other styles
36

Brock, Walter A. "Alternative Approaches to Correction of Malapropisms in AIML Based Conversational Agents." NSUWorks, 2014. http://nsuworks.nova.edu/gscis_etd/20.

Full text
Abstract:
The use of Conversational Agents (CAs) utilizing Artificial Intelligence Markup Language (AIML) has been studied in a number of disciplines. Previous research has shown a great deal of promise. It has also documented significant limitations in the abilities of these CAs. Many of these limitations are related specifically to the method employed by AIML to resolve ambiguities in the meaning and context of words. While methods exist to detect and correct common errors in spelling and grammar of sentences and queries submitted by a user, one class of input error that is particularly difficult to detect and correct is the malapropism. In this research a malapropism is defined as a "verbal blunder in which one word is replaced by another similar in sound but different in meaning" ("malapropism," 2013). This research explored the use of alternative methods of correcting malapropisms in sentences input to AIML CAs using measures of semantic distance and tri-gram probabilities. Results of these alternative methods were compared against AIML CAs using only the Symbolic Reductions built into AIML. This research found that the use of the two methodologies studied here did indeed lead to a small but measurable improvement in the performance of the CA in terms of the appropriateness of its responses as classified by human judges. However, it was also noted that in a large number of cases the CA simply ignored the existence of a malapropism altogether in formulating its responses. In most of these cases, the interpretation of and response to the user's input was of such a general nature that one might question the overall efficacy of the AIML engine. The answer to this question is a matter for further study.
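One of the measures the abstract mentions, tri-gram frequency, can be sketched as follows. The corpus, candidate list and function names are hypothetical illustrations, not the study's implementation; in particular, generating sound-alike candidates (the "similar in sound" part of the definition) is assumed to happen elsewhere:

```python
from collections import Counter

def trigram_counts(corpus_sentences):
    """Count word tri-grams over a (toy) reference corpus."""
    counts = Counter()
    for sentence in corpus_sentences:
        tokens = sentence.lower().split()
        for i in range(len(tokens) - 2):
            counts[tuple(tokens[i:i + 3])] += 1
    return counts

def best_replacement(sentence, index, candidates, counts):
    """Choose the candidate word whose substitution at position `index`
    yields the most frequent surrounding tri-grams."""
    tokens = sentence.lower().split()
    def score(word):
        trial = tokens[:index] + [word] + tokens[index + 1:]
        return sum(counts[tuple(trial[i:i + 3])]
                   for i in range(len(trial) - 2))
    return max(candidates, key=score)
```

With a corpus containing "for all intents and purposes it works", the malapropism in "for all intensive purposes" is repaired because "intents" scores higher than "intensive" in its tri-gram context.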
APA, Harvard, Vancouver, ISO, and other styles
37

Tadonfouet, Tadjou Lionel. "Constitution de fils de discussion cohérents à partir de conversations issues d’outils professionnels de communication et de collaboration." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS380.pdf.

Full text
Abstract:
Constituting coherent conversation threads from conversations held on professional communication and collaboration tools is a process of transforming a written, asynchronous conversation into sub-conversations, each dealing with a specific topic while preserving the order of arrival of the messages sent by the interlocutors in the original conversation. These sub-conversations thus give rise to linear or tree-shaped conversation structures. This process can be applied to forum discussions but also to email conversations, both of which are, more generally, instances of computer-mediated content (CMC). To build these sub-threads of email conversations, it is necessary to rely on their metadata and their content. In practice, however, these elements do not seem sufficient. An email conversation is in fact a dialogue with a discursive structure that is potentially useful for following the evolution of the discussion. It should be noted, however, that this dialogue is asynchronous, which introduces specific characteristics. In synchronous dialogues, very strong relations often emerge between consecutive utterances, which in a long exchange can form clusters of sub-conversations. To build sub-threads from original email conversations, we rely on this type of relation between the sentences of successive emails: these relations are called transverse. Unlike dialogues, where such relations can easily be identified, this is a very complex task for email conversations and constitutes the main sub-problem, named statement matching, for which we propose resolution approaches.
Conversations generally abound in linguistic and paralinguistic information, and dialogue acts are part of it. They very often help to better grasp the content of an exchange and could strongly contribute to building conversation sub-threads through a better identification of the relations between utterances. This is the hypothesis we make for solving the statement-matching problem, relying on a first phase of classification of dialogue utterances. In the manuscript, we present the work related to our core problem, as well as the sub-problems mentioned above. Around this main line of work, we address various related but important, necessary or useful aspects. We thus take an in-depth look at what CMC is, at discourse analysis and its history, and at the corpora available for approaching such problems. We then propose different approaches for solving our sub-problems, with detailed experiments and evaluations of these approaches. Finally, our manuscript closes with proposals such as applying the proposed approaches to other types of CMC, such as forums, and other avenues to explore for solving the problem of constituting conversation sub-threads.
Constituting coherent threads of conversation from professional communication and collaboration tools is a process of transforming a written, asynchronous conversation into sub-conversations, each dealing with a specific topic while maintaining the order of arrival of the messages sent by interlocutors in the original conversation. These sub-conversations thus result in linear or tree-like conversation structures. This process can be applied to forum discussions but also to email conversations, both examples being more generally representative of computer-mediated content (CMC). To build up these sub-threads of email conversations, we need to rely on their metadata and content. In practice, however, these elements do not seem sufficient. An email conversation is, in fact, a dialogue with a discursive structure that is potentially useful for tracking the evolution of the discussion. It should be noted, however, that this dialogue is asynchronous, which introduces its own specificities. In synchronous dialogues, very strong relationships often emerge between consecutive utterances, which in a long discussion can form clusters of sub-conversations. The constitution of conversation sub-threads from main conversations is based on this type of relationship between the sentences of successive emails in a conversation: this type of relationship is referred to as transverse. Unlike dialogues, where such relations can easily be identified, this is a very complex task in email conversations and constitutes the main sub-problem, called statement matching, for which we suggest several resolution methods. Conversations generally abound in linguistic and paralinguistic information, among which are dialogue acts. They very often help to better identify the content of a dialogue and could strongly contribute to constituting conversation sub-threads via a better identification of relations between utterances.
This is the hypothesis we state in the context of solving the statement-matching problem, based on an initial phase of classification of dialogue statements. This manuscript describes the work related to our core problem, as well as the sub-problems mentioned above. Around this main focus, we address various related but important, necessary or useful aspects. Thus, we take an in-depth look at CMC, at discourse analysis and its history, and at the corpora available for approaching such problems. Then we offer different resolution methods for our sub-problems, with well-detailed experiments and evaluations of said methods. Finally, our manuscript concludes with the following propositions: the application of the proposed methods to other types of CMC, such as forums, and other possibilities to be explored to solve the problem of constituting conversational sub-threads.
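A greedy sketch of splitting a message stream into topic-coherent sub-threads by lexical similarity is shown below. The similarity measure, threshold and example messages are all illustrative assumptions; the thesis's transverse-relation matching operates at the sentence level and is considerably richer than this:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words representation of a message."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def assign_threads(messages, threshold=0.3):
    """Greedily attach each message to the sub-thread whose last message
    it resembles most; otherwise start a new sub-thread. Arrival order
    within each sub-thread is preserved, as the abstract requires."""
    threads = []
    for msg in messages:
        best, best_sim = None, threshold
        for thread in threads:
            sim = cosine(bow(msg), bow(thread[-1]))
            if sim > best_sim:
                best, best_sim = thread, sim
        if best is not None:
            best.append(msg)
        else:
            threads.append([msg])
    return threads
```

On a toy stream mixing a support topic and a social topic, the sketch recovers two linear sub-threads, illustrating the linear structures mentioned above; tree-like structures would require attaching to any earlier message, not just the last.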
APA, Harvard, Vancouver, ISO, and other styles
38

Panesar, Kulvinder. "Natural language processing (NLP) in Artificial Intelligence (AI): a functional linguistic perspective." Vernon Press, 2020. http://hdl.handle.net/10454/18140.

Full text
Abstract:
This chapter encapsulates the multi-disciplinary nature of NLP in AI and reports on a linguistically orientated conversational software agent (CSA) (Panesar 2017) framework sensitive to natural language processing (NLP) and to language in the agent environment. We present a novel computational approach using the functional linguistic theory of Role and Reference Grammar (RRG) as the linguistic engine. Viewing language as action, utterances change the state of the world, and hence speakers' and hearers' mental states change as a result of these utterances. The plan-based method of discourse management (DM) using the BDI model architecture is deployed to support a greater complexity of conversation. This CSA investigates the integration, intersection and interface of the language, knowledge, and speech act constructions (SAC) as a grammatical object, and the sub-model of BDI and DM for NLP. We present an investigation into the intersection and interface between our linguistic and knowledge (belief-base) models for both dialogue management and planning. The architecture has three phase models: (1) a linguistic model based on RRG; (2) an Agent Cognitive Model (ACM) with (a) a knowledge representation model employing conceptual graphs (CGs) serialised to the Resource Description Framework (RDF) and (b) a planning model underpinned by BDI concepts, intentionality and rational interaction; and (3) a dialogue model employing common ground. Use of RRG as a linguistic engine for the CSA was successful. We identify the complexity of the semantic gap in internal representations and give details of a conceptual bridging solution.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, John Ray. "Conversations with an intelligent agent-- modeling and integrating patterns in communications among humans and agents." Diss., University of Iowa, 2006. http://ir.uiowa.edu/etd/61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pinard-Prévost, Geneviève. "Enjeux de la transcription du matériel paraverbal dans les corpus de langue orale en contexte naturel." Mémoire, Université de Sherbrooke, 2011. http://hdl.handle.net/11143/5641.

Full text
Abstract:
One of the main obstacles to the complete analysis of natural conversations remains the presence of many speech overlaps. When three or more speakers are involved, parallel conversations increase the challenge further. Indeed, the mixed speech signals that result cannot be submitted to computerized acoustic analysis. According to our results, deciding to study only the conversational extracts with a pure speech signal, so as to allow computer-supported analyses, deprives us of half of the prosodic events produced in natural contexts (pitch and loudness of the voice, syllable lengthening and pauses), since they abound in overlapping speech. For conducting lexico-semantic and interactional research, we prefer not to sacrifice the natural character of the conversations. This is why we rely on a fully human analysis, although it may be supported by technology insofar as the technology does not override the transcriber's judgment (for example, for measuring pauses or lengthened syllables). At the end of this thesis, we propose a perceptual transcription method, improved with respect to current practice, likely to favour the relevant marking of prosody for lexico-semantic and interactional analyses, as well as high fidelity to the primary data. In this way, both conversation extracts where a single speaker is talking and moments of overlap or parallel conversation can be taken into account by researchers. We also propose some graphical innovations to increase the readability of transcriptions that include such precise marking of prosodic prominences.
APA, Harvard, Vancouver, ISO, and other styles
41

Baldinato, José Otavio. "Conhecendo a química: um estudo sobre obras de divulgação do início do século XIX." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/81/81132/tde-21032016-113015/.

Full text
Abstract:
Popularization books allow us to look at the science of a period in a particular way, and the perception of patterns in these works can indicate how a certain field of research was presented to the non-specialist public. In this research we investigate introductory texts on chemistry published in England during the first half of the nineteenth century, which for many authors represents the period of greatest popularity this science has ever experienced. In the light of contemporary historiography of science, we value access to original records and the reconstruction of contexts, seeking criteria that allow us to analyse: What context motivated the production and consumption of these popularization books? Which works were most relevant in the period? What view of chemistry was communicated by popularization? With regard to teaching, we seek to provide historiographical material that makes explicit the dynamic character of chemistry, as well as its links with social, economic, political and religious issues, prompting reflections on aspects of the nature of science with a focus on teacher training. Our results reveal a broad context in which the natural sciences were valued as tools of social progress. Among the most prominent works, reviews and critiques in the local periodicals point to The Chemical Catechism, by Samuel Parkes, and Conversations on Chemistry, by Jane Marcet. Both were originally published in 1806 and received several new editions and translations, and were also adapted and plagiarized by other authors.
Although they present quite different styles, these texts suggest a common view of chemistry, treated as a science: of a utilitarian character, applied directly to solving problems of economic and social interest; grounding its understanding of matter in the processes of synthesis and decomposition; arousing common interest through the strong sensory appeal of its experiments; and unveiling the divine wisdom hidden in the laws that govern natural phenomena. This last characteristic reveals the coexistence of the discourses of science and religion in the popularization texts of the period. This thesis seeks a dialogue with the training of chemistry teachers today, pointing out how a historical look at science can prompt reflections of interest for teaching.
Popularization books provide a particular way of accessing science within specific historical contexts by allowing one to glimpse how a certain field of knowledge was addressed to the lay public. The present research focuses on early nineteenth-century introductory books on chemistry published in England as objects of study. For many authors, chemistry experienced its period of greatest popularity at that time. The methodological framework was based on current historiography of science, combining careful consideration of the historical context with the search for primary sources. Research questions included: What context motivated the production and the consumption of popular chemistry books? Which amongst these books achieved the greatest relevance? What was the image of chemistry communicated by popularization initiatives? Seeking a contribution to science teaching, this thesis provides historiographical material that makes explicit the dynamic character of chemistry as a science that deals with social, economic, political and religious issues. Such influences are highlighted in order to encourage reflections on aspects of the nature of science with a focus on teacher training. Results reveal a broader context connecting the development of natural philosophy with social progress. Contemporary periodical reviews point to the books entitled The Chemical Catechism, by Samuel Parkes, and Conversations on Chemistry, by Jane Marcet, as among the most successful in their genre. Both were first published in 1806 with several further editions and reprints, and were also translated into several languages and even plagiarized by other authors.
Despite their very different styles, both texts suggest a common image of chemistry, which included: a practical appeal by its direct application in solving problems of economic and social interest; the processes of synthesis and decomposition as means for understanding matter in general; a strong sensory appeal provided by experiments; and the capacity to unveil divine wisdom hidden in the laws governing natural phenomena. This last feature reveals the interaction between the discourses of science and religion in popularization texts of the period. This thesis also proposes a dialogue with current training of chemistry teachers, by suggesting how a historical look at science may give rise to useful reflections for chemistry educators.
APA, Harvard, Vancouver, ISO, and other styles
42

Sennett, Evan James. "Sky Water: The Intentional Eye and the Intertextual Conversation between Henry David Thoreau and Harlan Hubbard." University of Toledo Honors Theses / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=uthonors1544635048555133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bondanini, Andrea. "Chatbot ed Elaborazione Naturale del Linguaggio. Progettazione e realizzazione di un assistente sanitario." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis concerns the study and development of a conversational agent applied in the healthcare domain, within a project in cooperation with a local company (Onit srl). In particular, the aim is to facilitate access to the medical services provided by the Italian National Health Service, allowing a patient to book appointments by chatting naturally through a messaging application. The system also provides an FAQ service, which the user can use to request information about a type of medical service.
APA, Harvard, Vancouver, ISO, and other styles
44

Kerr, Tamsin. "Conversations with the bunyip : the idea of the wild in imagining, planning, and celebrating place through metaphor, memoir, mythology, and memory." Griffith University. Griffith School of Environment, 2007. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070814.160841.

Full text
Abstract:
What lies beneath
Our cultured constructions?
The wild lies beneath.
The mud and the mad, the bunyip Other, lies beneath.
It echoes through our layered metaphors
We hear its memories
Through animal mythology in wilder places
Through emotive imagination of landscape memoir
Through mythic archaeologies of object art.
Not the Nation, but the land has active influence.
In festivals of bioregion, communities re-member its voice.
Our creativity goes to what lies beneath.
This thesis explores the ways we develop deeper and wilder connections to specific regional and local landscapes using art, festival, mythology and memoir. It argues that we inhabit and understand the specific nature of our locale when we plan space for the non-human and creatively celebrate culture-nature coalitions. A wilder and more active sense of place relies upon community cultural conversations with the mythic, represented in the Australian exemplar of the bunyip. The bunyip acts as a metaphor for the subaltern or hidden culture of a place. The bunyip is land incarnate. No matter how pristine the wilderness or how concrete the urban, every region has its localised bunyip-equivalent that defines, and is shaped by, its community and their environmental relationships. Human/non-human cohabitations might be actively expressed through art and cultural experience to form a wilder, more emotive landscape memoir. This thesis discusses a diverse range of landstories, mythologies, environmental art, and bioregional festivities from around Australasia with a special focus on the Sunshine Coast or Gubbi-Gubbi region. It suggests a subaltern indigenous influence in how we imagine, plan and celebrate place. The cultural discourses of metaphor, memoir, mythology and memory shape land into landscapes. When the metaphor is wild, the memoir celebratory, the mythology animal, the memory creative and complex, our ways of being are ecocentric and grounded. The distinctions between nature and culture become less defined; we become native to country. Our multi-cultured histories are written upon the earth; our community identities shape and are shaped by the land. Together, monsters and festivals remind us of the active land.
APA, Harvard, Vancouver, ISO, and other styles
45

Kerr, Tamsin. "Conversations with the bunyip: the idea of the wild in imagining, planning, and celebrating place through metaphor, memoir, mythology, and memory." Thesis, Griffith University, 2007. http://hdl.handle.net/10072/365495.

Full text
Abstract:
What lies beneath
Our cultured constructions?
The wild lies beneath.
The mud and the mad, the bunyip Other, lies beneath.
It echoes through our layered metaphors
We hear its memories
Through animal mythology in wilder places
Through emotive imagination of landscape memoir
Through mythic archaeologies of object art.
Not the Nation, but the land has active influence.
In festivals of bioregion, communities re-member its voice.
Our creativity goes to what lies beneath.
This thesis explores the ways we develop deeper and wilder connections to specific regional and local landscapes using art, festival, mythology and memoir. It argues that we inhabit and understand the specific nature of our locale when we plan space for the non-human and creatively celebrate culture-nature coalitions. A wilder and more active sense of place relies upon community cultural conversations with the mythic, represented in the Australian exemplar of the bunyip. The bunyip acts as a metaphor for the subaltern or hidden culture of a place. The bunyip is land incarnate. No matter how pristine the wilderness or how concrete the urban, every region has its localised bunyip-equivalent that defines, and is shaped by, its community and their environmental relationships. Human/non-human cohabitations might be actively expressed through art and cultural experience to form a wilder, more emotive landscape memoir. This thesis discusses a diverse range of landstories, mythologies, environmental art, and bioregional festivities from around Australasia with a special focus on the Sunshine Coast or Gubbi-Gubbi region. It suggests a subaltern indigenous influence in how we imagine, plan and celebrate place. The cultural discourses of metaphor, memoir, mythology and memory shape land into landscapes. When the metaphor is wild, the memoir celebratory, the mythology animal, the memory creative and complex, our ways of being are ecocentric and grounded. The distinctions between nature and culture become less defined; we become native to country. Our multi-cultured histories are written upon the earth; our community identities shape and are shaped by the land. Together, monsters and festivals remind us of the active land.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Environment
Faculty of Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
46

Schröder, Marc [Author], and Hans [Academic supervisor] Uszkoreit. "The SEMAINE API : a component integration framework for a naturally interacting and emotionally competent embodied conversational agent / Marc Schröder. Supervisor: Hans Uszkoreit." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1051586518/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

He, Yun. "Politeness in contemporary Chinese : a postmodernist analysis of generational variation in the use of compliments and compliment responses." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/9460.

Full text
Abstract:
There is some evidence from scholarship that politeness norms in China are diversified. I maintain that a study aiming to provide systematic evidence of this would require an approach to politeness phenomena that is able to address such diversity. Drawing upon the insights of recent scholarship on the distinction between the modernist and postmodernist approaches to politeness, I survey relevant literature. I show that many current works on politeness argue that the modernist approach (Lakoff 1973/1975, Brown and Levinson 1987[1978], Leech 1983) generally tends to assume that society is relatively homogeneous with regard to politeness norms. By contrast, I demonstrate that the postmodernist approach to politeness (e.g. Eelen 2001, Mills 2003, Watts 2003) foregrounds the heterogeneity of society and the rich variability of politeness norms within a given culture. I argue that, by using a postmodernist approach to politeness, it is possible to show evidence of differences between groups of the Chinese in their politeness behaviour and the informing norms of politeness. I then explore this issue in depth by focusing on compliments and compliment responses (CRs). I show that studies on these speech acts in Chinese have to date tended to adopt a modernist approach to politeness and often assume a compliment and a CR to be easily identifiable. Moreover, I show that they do not address the heterogeneity of Chinese society and generally assume interactants to be homogeneous in terms of politeness norms that inform compliment and CR behaviours. On this basis, I raise the questions as to whether, by adopting a postmodernist rather than modernist approach, there is empirical evidence that politeness norms informing compliments and CRs vary among the Chinese, and whether these norms correlate with generation. 
To this end, by audio-recording both spontaneous naturally occurring conversations and follow-up interviews, I construct a corpus of compliments and CRs generated by two generations of Chinese speakers brought up before and after the launch of China's reform. Quantitative and qualitative analyses of these data show that there is variation in compliment and CR behaviours in Chinese and in the informing politeness norms. Furthermore, the results show that this variation correlates with generation. I then show how, by using a research methodology which emphasizes the interactants' perceptions obtained through follow-up interviews, my study brings to light problems with previous studies on compliments and CRs which have hitherto not been addressed. By showing evidence that compliments and CRs are not as easy to identify as many previous researchers have indicated, I argue that my emic approach to data analysis provides a useful perspective on the complexity of intention in studies on speech acts and perhaps beyond. My study therefore makes an interesting contribution to the debate over this notion, which is central to politeness research. Moreover, I argue that my methodology, which categorizes and analyzes data according to participants' self-reported perceptions, allows me to draw out differences in the two generations' compliment and CR behaviours and the informing politeness norms.
APA, Harvard, Vancouver, ISO, and other styles
48

Desai, Krutarth. "California State University, San Bernardino Chatbot." CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/775.

Full text
Abstract:
Nowadays, chatbot development has been moving out of artificial-intelligence labs and into the hands of desktop and mobile domain experts. In today's fast-growing technology landscape, most smartphone users spend much of their time in messaging apps such as Facebook Messenger. A chatbot is a computer program that uses messaging channels to interact with users in natural language. A chatbot maps user inputs onto queries against a relational database, fetches the data by calling an existing API, and then sends an appropriate response to the user to drive the conversation. Drawbacks of existing approaches include the need to learn and use chatbot-specific languages such as AIML (Artificial Intelligence Markup Language), a high degree of botmaster interference, and reliance on immature technology. In this project, a Facebook Messenger-based chatbot is proposed to provide a domain-independent, easy-to-use, smart, scalable, dynamic conversational agent for obtaining information about CSUSB. It identifies user intents from natural-language input and supports a variety of application domains. These capabilities and their scalability will be evaluated in future phases of this project.
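The loop the abstract describes (identify a user's intent from natural language, map it to a data lookup, and reply) can be sketched as a minimal keyword-based handler. This is an illustrative assumption, not the thesis's actual implementation: the intent names, keyword lists, and FAQ answers below are invented, and a real deployment would call the Facebook Messenger Send API instead of printing.

```python
# Hypothetical knowledge table standing in for the relational-database /
# API-backed lookup described in the abstract.
FAQ = {
    "admissions": "Application information is on the CSUSB admissions page.",
    "library": "Library hours are listed on the CSUSB library site.",
}

# Hypothetical keyword sets mapping natural-language tokens to intents.
INTENT_KEYWORDS = {
    "admissions": {"apply", "admission", "admissions", "enroll"},
    "library": {"library", "books", "study"},
}

def detect_intent(utterance):
    """Return the first intent whose keyword set overlaps the utterance."""
    tokens = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if tokens & keywords:
            return intent
    return None  # no intent recognized

def reply(utterance):
    """Map an utterance to an intent and fetch the stored answer."""
    intent = detect_intent(utterance)
    if intent is None:
        return "Sorry, I don't have an answer for that yet."
    return FAQ[intent]

print(reply("where is the library"))
```

In practice the keyword matcher would be replaced by a trained intent classifier, but the control flow (classify, look up, respond) is the same.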
APA, Harvard, Vancouver, ISO, and other styles
49

Popescu, Vladimir. "Formalisation des contraintes pragmatiques pour la génération des énoncés en dialogue homme-machine multi-locuteurs." PhD thesis, Grenoble INPG, 2008. http://tel.archives-ouvertes.fr/tel-00343846.

Full text
Abstract:
We have developed a framework for controlling utterance generation in multi-party human-computer dialogue. This process takes place in four stages: (i) the rhetorical structure of the dialogue is computed, using an emulation of SDRT ("Segmented Discourse Representation Theory"); (ii) this structure is used to compute the speakers' commitments, which drive the adjustment of the illocutionary force of the utterances; (iii) the commitments are filtered and placed in a stack for each speaker, and these stacks are used to perform semantic ellipses; (iv) the rhetorical structure drives the choice of concessive connectors (mais, quand même, pourtant and bien que) between utterances; to do this, the utterances are ordered from an argumentative viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
50

Popescu, Vladimir. "Formalisation des contraintes pragmatiques pour la génération des énoncés en dialogue homme-machine multi-locuteurs." PhD thesis, Grenoble INPG, 2008. http://www.theses.fr/2008INPG0175.

Full text
Abstract:
We have developed a framework for controlling utterance generation in multi-party human-computer dialogue. This process takes place in four stages: (i) the rhetorical structure for the dialogue is computed, by using an emulation of SDRT ("Segmented Discourse Representation Theory"); (ii) this structure is used for computing speakers' commitments; these commitments are used for driving the process of adjusting the illocutionary force degree of the utterances; (iii) the commitments are filtered and placed in a stack for each speaker; these stacks are used for performing semantic ellipses; (iv) the discourse structure drives the choice of concessive connectors (mais, quand même, pourtant and bien que) between utterances; to do this, the utterances are ordered from an argumentative viewpoint.
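Stage (iii) of the pipeline described above, per-speaker commitment stacks used to license semantic ellipsis, might be sketched as follows. Representing commitments as simple predicate strings is an illustrative assumption, not the thesis's actual formalism:

```python
from collections import defaultdict

class CommitmentTracker:
    """Per-speaker stacks of discourse commitments (stage iii of the pipeline)."""

    def __init__(self):
        # speaker name -> stack (list) of that speaker's commitments
        self.stacks = defaultdict(list)

    def commit(self, speaker, commitment):
        """Push a commitment onto the speaker's stack."""
        self.stacks[speaker].append(commitment)

    def is_shared(self, commitment):
        """A commitment held by every speaker is common ground."""
        return bool(self.stacks) and all(
            commitment in stack for stack in self.stacks.values()
        )

    def generate(self, speaker, content_parts):
        """Semantic ellipsis: drop parts the speaker has already committed to."""
        stack = self.stacks[speaker]
        return [part for part in content_parts if part not in stack]

tracker = CommitmentTracker()
tracker.commit("A", "meeting(tuesday)")
tracker.commit("B", "meeting(tuesday)")
# "meeting(tuesday)" is already on A's stack, so A's next utterance elides it:
print(tracker.generate("A", ["meeting(tuesday)", "room(b12)"]))  # -> ['room(b12)']
```

The filtering step the abstract mentions would sit between the SDRT-derived commitment computation and these stacks; here it is omitted for brevity.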
APA, Harvard, Vancouver, ISO, and other styles