Dissertations on the topic "Machine theory of collective intelligence"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 dissertations for research on the topic "Machine theory of collective intelligence".
Next to every work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Ekpe, Bassey. "Theories of collective intelligence and decision-making : towards a viable United Nations intelligence system." Thesis, University of Huddersfield, 2005. http://eprints.hud.ac.uk/id/eprint/7481/.
Carlucci, Lorenzo. "Some cognitively-motivated learning paradigms in Algorithmic Learning Theory." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 0.68 Mb., p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220797.
Gramer, Rachel. "A GENRE OF COLLECTIVE INTELLIGENCE: BLOGS AS INTERTEXTUAL, RECIPROCAL, AND PEDAGOGICAL." Master's thesis, University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2341.
M.A.
Department of English
Arts and Humanities
English MA
Lu, Yibiao. "Statistical methods with application to machine learning and artificial intelligence." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44730.
Riedel, Marion, and Tino Schwarze. "Machine Translation (MT) - History, Theory, Problems and Usage." Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100437.
Gulcehre, Caglar. "Two Approaches For Collective Learning With Language Games." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613109/index.pdf.
Повний текст джерелаs naming game. The emergence of categories throughout interactions between a population of agents in the categorization games are analyzed. The test results of categorization games as a model combination algorithm with various machine learning algorithms on different data sets have shown that categorization games can have a comparable performance with fast convergence.
Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.
Georgescu, Mihai [Verfasser]. "When in doubt ask the crowd : leveraging collective intelligence for improving event detection and machine learning / Mihai Georgescu." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2015. http://d-nb.info/107359663X/34.
Ahlberg, Helgee Ernst. "Improving drug discovery decision making using machine learning and graph theory in QSAR modeling." Göteborg : Dept. of Chemistry, University of Gothenburg, 2010. http://gupea.ub.gu.se/dspace/handle/2077/21838.
Lucking, Walter. "The application of time encoded signals to automated machine condition classification using neural networks." Thesis, University of Hull, 1997. http://hydra.hull.ac.uk/resources/hull:3766.
Perrot, Michaël. "Theory and algorithms for learning metrics with controlled behaviour." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES072/document.
Many machine learning algorithms make use of a notion of distance or similarity between examples to solve various problems such as classification, clustering or domain adaptation. Depending on the task considered, these metrics should have different properties, but manually choosing an adapted comparison function can be tedious and difficult. A natural trend is then to automatically tailor such metrics to the task at hand. This is known as metric learning, and the goal is mainly to find the best parameters of a metric under some specific constraints. Standard approaches in this field usually focus on learning Mahalanobis distances or bilinear similarities, and one of the main limitations is that control over the behaviour of the learned metrics is often limited. Furthermore, while some theoretical works exist to justify the generalization ability of the learned models, most of the approaches do not come with such guarantees. In this thesis we propose new algorithms to learn metrics with a controlled behaviour, and we put a particular emphasis on the theoretical properties of these algorithms. We propose four distinct contributions which can be separated in two parts, namely (i) controlling the metric with respect to a reference metric and (ii) controlling the underlying transformation corresponding to the learned metric. Our first contribution is a local metric learning method where the goal is to regress a distance proportional to the human perception of colors. Our approach is backed up by theoretical guarantees on the generalization ability of the learned metrics. In our second contribution we are interested in theoretically studying the interest of using a reference metric in a biased regularization term to help during the learning process. We propose to use three different theoretical frameworks allowing us to derive three different measures of goodness for the reference metric.
These measures give us some insights on the impact of the reference metric on the learned one. In our third contribution we propose a metric learning algorithm where the underlying transformation is controlled. The idea is that instead of using similarity and dissimilarity constraints we associate each learning example to a so-called virtual point belonging to the output space associated with the learned metric. We theoretically show that metrics learned in this way generalize well, but also that our approach is linked to a classic metric learning method based on pairwise constraints. In our fourth contribution we also try to control the underlying transformation of a learned metric. However, instead of considering a point-wise control we consider a global one, by forcing the transformation to follow the geometrical transformation associated with an optimal transport problem. From a theoretical standpoint we propose a discussion on the link between the transformation associated with the learned metric and the transformation associated with the optimal transport problem. On a more practical side we show the interest of our approach for domain adaptation but also for a task of seamless copy in images.
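The pair-constraint formulation described above can be made concrete with a small sketch. The following toy example is purely illustrative (it is not one of the thesis's algorithms; the margins, learning rate, and the restriction to a diagonal Mahalanobis matrix are all assumptions made for brevity): it learns per-feature weights so that similar pairs end up close and dissimilar pairs end up far apart under the learned metric.

```python
# Illustrative sketch of metric learning from pair constraints:
# learn a diagonal Mahalanobis metric with simple hinge-style updates.
def mahalanobis_sq(x, y, w):
    """Squared Mahalanobis distance with diagonal weights w (w[i] >= 0)."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def learn_diagonal_metric(similar, dissimilar, dim, lr=0.05, epochs=200):
    """Push similar pairs below distance 1.0 and dissimilar pairs above 2.0."""
    w = [1.0] * dim
    for _ in range(epochs):
        for x, y in similar:
            if mahalanobis_sq(x, y, w) > 1.0:  # too far apart: shrink weights
                for i in range(dim):
                    w[i] = max(0.0, w[i] - lr * (x[i] - y[i]) ** 2)
        for x, y in dissimilar:
            if mahalanobis_sq(x, y, w) < 2.0:  # too close: grow weights
                for i in range(dim):
                    w[i] += lr * (x[i] - y[i]) ** 2
    return w

# Feature 0 is irrelevant to the pairing; feature 1 separates the pairs.
similar = [((0.0, 0.0), (3.0, 0.1)), ((1.0, 0.2), (4.0, 0.0))]
dissimilar = [((0.0, 0.0), (0.2, 1.5)), ((3.0, 0.1), (3.1, 1.4))]
w = learn_diagonal_metric(similar, dissimilar, dim=2)
```

After training, the weight on the irrelevant feature is driven toward zero while the discriminative feature is amplified, which is the "controlled behaviour" idea in miniature: the constraints shape what the metric attends to.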
Pajany, Peroumal. "AI Transformative Influence: Extending the TRAM to Management Student's AI’s Machine Learning Adoption." Franklin University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=frank1623093426530669.
Yu, Shen. "A Bayesian machine learning system for recognizing group behaviour." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:8881/R/?func=dbin-jump-full&object_id=32565.
Middleton, Steven Anthony, and smi81431@bigpond net au. "A limited study of mechanical intelligence as media." RMIT University. Creative Media, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080717.161751.
Duminy, Willem H. "A learning framework for zero-knowledge game playing agents." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-10172007-153836.
Gu, Tianyu. "Shelang : An Implementation of Probabilistic Programming Language and its Applications." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-26016.
Emele, Chukwuemeka David. "Informing dialogue strategy through argumentation-derived evidence." Thesis, University of Aberdeen, 2011. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=179453.
Piquemal-Baluard, Christine. "L'explication collective dans une société d'agents : conception d'un agent explicatif pour l'environnement SYNERGIC." Toulouse 3, 1994. http://www.theses.fr/1994TOU30064.
Gao, Xi. "Graph-based Regularization in Machine Learning: Discovering Driver Modules in Biological Networks." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3942.
Banda, Brandon Mathewe. "General Game Playing as a Bandit-Arms Problem: A Multiagent Monte-Carlo Solution Exploiting Nash Equilibria." Oberlin College Honors Theses / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1559142912626158.
Doran, Gary Brian Jr. "Multiple-Instance Learning from Distributions." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1417736923.
Weninger, Timothy Edwards. "Link discovery in very large graphs by constructive induction using genetic programming." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/1087.
Berisha, Visar. "AI as a Threat to Democracy : Towards an Empirically Grounded Theory." Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340733.
Hazarika, Subhashis. "Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574692702479196.
Streeter, Matthew J. "Automated discovery of numerical approximation formulae via genetic programming." Link to electronic thesis, 2001. http://www.wpi.edu/Pubs/ETD/Available/etd-0426101-231555.
Title from title screen. Keywords: genetic programming; approximations; machine learning; artificial intelligence. Includes bibliographical references (p. 92-94).
Crocker, Matthew Walter. "A principle-based system for natural language analysis and translation." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27863.
Faculty of Science
Department of Computer Science
Graduate
Billings, Dr Donald G. "Disruptive Innovation Within the Legal Services Ecosystem." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7119.
Mbambe, Bebey Danielle. "Design d'expériences transmédia pour l'engagement en formation (DEEXTEF)." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1215/document.
We describe the phenomenon of engagement through transmedia experiences co-constructed with the beneficiaries in the context of adult education. We approach this field on the assumption that a transmedia with experiential value makes it possible to increase the participation of subjects and so consolidate commitment in training. This hypothesis opens up the prospect of a transmedia type of mediation capable of integrating the objectives of scientific exploitation for commitment, for the enhancement of participation, and for attention, which could be interesting for other corpora. Based on an analysis framework focused on the beneficiaries of the transactions, our survey highlighted different forms of hybrid transmedia engagement with specific characteristics. The complementarity of these transmedia has favoured various commitment regimes, observed on an ad hoc basis, towards a long-term commitment.
Stephanos, Dembe. "Machine Learning Approaches to Dribble Hand-off Action Classification with SportVU NBA Player Coordinate Data." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3908.
Machart, Pierre. "Coping with the Computational and Statistical Bipolar Nature of Machine Learning." Phd thesis, Aix-Marseille Université, 2012. http://tel.archives-ouvertes.fr/tel-00771718.
Jambeiro Filho, Jorge Eduardo de Schoucair. "Tratamento bayesiano de interações entre atributos de alta cardinalidade." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276204.
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: In this work, we analyze the use of Bayesian methods in a pattern classification problem of practical interest for Brazil's Federal Revenue which is characterized by the presence of high-cardinality attributes and by the existence of relevant interactions among them. We show that the presence of high-cardinality attributes can easily produce so many subdivisions in the training set that, even having originally a great amount of data, we end up with unreliable probability estimates, inferred from small samples. We cover the most common strategies to deal with this problem within the Bayesian universe and show that they rely strongly on non-interaction assumptions that are unacceptable in our target domain. We show empirically that more advanced strategies to handle high-cardinality attributes, like cardinality reduction by preprocessing and conditional probability table replacement with default tables, decision trees and decision graphs, in spite of some restricted benefits, do not improve overall performance in our target domain. We propose a new Bayesian classification method, named hierarchical pattern Bayes (HPB), which calculates posterior class probabilities for a pattern W by combining the observations of W in the training set with prior class probabilities that are obtained recursively from the observations of patterns that are strictly more generic than W. This way, it can capture interactions among high-cardinality attributes when there is enough data, without producing unreliable probabilities when there is not.
We show empirically that, in our target domain, HPB achieves significant performance improvements over Bayesian networks with popular structures like naïve Bayes and tree augmented naïve Bayes, over Bayesian networks where traditional conditional probability tables were replaced by noisy-OR gates, default tables, decision trees and decision graphs, and over Bayesian networks constructed after a cardinality reduction preprocessing phase using the agglomerative information bottleneck method. Moreover, we explain how HPB can replace the conditional probability tables of Bayesian networks and show, with tests on another practical problem, that such replacement can result in significant benefits. Finally, with tests over several UCI datasets, we show that HPB has quite wide applicability.
Doctorate
Information Systems
Doctor of Computer Science
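The recursive back-off at the heart of the HPB abstract above can be illustrated with a deliberately tiny sketch (a generic illustration of the idea, not the author's implementation; the smoothing constant `s`, the uniform base prior, and the simple averaging over generic patterns are all assumptions). A pattern's class frequencies are shrunk toward a prior computed from strictly more generic patterns, so rare but interacting attribute combinations still get sensible estimates:

```python
def matches(example, pattern):
    """A pattern is a tuple of attribute values, with None as a wildcard."""
    return all(p is None or p == e for e, p in zip(example, pattern))

def hpb_estimate(data, pattern, classes, s=2.0):
    """Posterior class probabilities for `pattern`, shrunk toward a prior
    obtained recursively from strictly more generic patterns."""
    hits = [c for x, c in data if matches(x, pattern)]
    if all(p is None for p in pattern):
        prior = {c: 1.0 / len(classes) for c in classes}  # base case: uniform
    else:
        # generalize by wildcarding each specified attribute in turn
        generics = []
        for i, p in enumerate(pattern):
            if p is not None:
                g = list(pattern)
                g[i] = None
                generics.append(tuple(g))
        priors = [hpb_estimate(data, g, classes, s) for g in generics]
        prior = {c: sum(pr[c] for pr in priors) / len(priors) for c in classes}
    n = len(hits)
    return {c: (hits.count(c) + s * prior[c]) / (n + s) for c in classes}

# XOR-like interaction: neither attribute predicts the class on its own.
data = ([((0, 0), "pos")] * 4 + [((1, 1), "pos")] * 4
        + [((0, 1), "neg")] * 4 + [((1, 0), "neg")] * 4)
seen = hpb_estimate(data, (0, 0), ["pos", "neg"])    # observed combination
unseen = hpb_estimate(data, (2, 2), ["pos", "neg"])  # never-observed combination
```

For the observed combination the estimate leans strongly toward the observed class even though each attribute alone is uninformative, while the never-observed combination falls back to the generic prior instead of an unreliable small-sample estimate.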
Zantedeschi, Valentina. "A Unified View of Local Learning : Theory and Algorithms for Enhancing Linear Models." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES055/document.
In machine learning, data characteristics usually vary over the space: the overall distribution might be multi-modal and contain non-linearities. In order to achieve good performance, the learning algorithm should then be able to capture and adapt to these changes. Even though linear models fail to describe complex distributions, they are renowned for their scalability, at training and at testing, to datasets that are big in terms of number of examples and number of features. Several methods have been proposed to take advantage of the scalability and simplicity of linear hypotheses to build models with great discriminatory capabilities. These methods empower linear models, in the sense that they enhance their expressive power through different techniques. This dissertation focuses on enhancing local learning approaches, a family of techniques that infers models by capturing the local characteristics of the space in which the observations are embedded. The founding assumption of these techniques is that the learned model should behave consistently on examples that are close, implying that its results should also change smoothly over the space. The locality can be defined by spatial criteria (e.g. closeness according to a selected metric) or by other provided relations, such as the association to the same category of examples or a shared attribute. Local learning approaches are known to be effective in capturing complex distributions of the data, avoiding the need to select a model specific to the task. However, state-of-the-art techniques suffer from three major drawbacks: they easily memorize the training set, resulting in poor performance on unseen data; their predictions lack smoothness in particular locations of the space; and they scale poorly with the size of the datasets.
The contributions of this dissertation investigate the aforementioned pitfalls in two directions: we propose to introduce side information in the problem formulation to enforce smoothness in prediction and attenuate the memorization phenomenon, and we provide a new representation for the dataset which takes into account its local specificities and improves scalability. Thorough studies are conducted to highlight the effectiveness of these contributions and confirm the soundness of their intuitions. We empirically study the performance of the proposed methods on both toy and real tasks, in terms of accuracy and execution time, and compare it to state-of-the-art results. We also analyze our approaches from a theoretical standpoint, by studying their computational and memory complexities and by deriving tight generalization bounds.
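The appeal of local models over a single global linear hypothesis, as described in the abstract above, can be seen in a minimal sketch (illustrative only, not one of the dissertation's methods; the hand-picked split at x = 0 stands in for a learned notion of locality). A single no-intercept linear regressor cannot fit y = |x|, while two local regressors, one per half-space, fit it exactly:

```python
def fit_slope(points):
    """Least-squares slope through the origin: argmin_a sum (a*x - y)^2."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, y in points)
    return sxy / sxx

xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
data = [(x, abs(x)) for x in xs]          # non-linear target y = |x|

global_slope = fit_slope(data)            # one model for the whole space
left_slope = fit_slope([p for p in data if p[0] < 0])
right_slope = fit_slope([p for p in data if p[0] >= 0])

def predict_global(x):
    return global_slope * x

def predict_local(x):                     # locality: the sign of x picks the model
    return (left_slope if x < 0 else right_slope) * x

def sq_err(predict):
    return sum((predict(x) - y) ** 2 for x, y in data)
```

Here the global fit collapses to a zero slope (the two halves cancel each other out), whereas the local fit recovers the piecewise structure, which is exactly the multi-modality argument made in the abstract.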
Lallée, Stéphane. "Towards a distributed, embodied and computational theory of cooperative interaction." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10052/document.
Robots will gradually integrate our homes wielding the role of companions, humanoid or not. In order to cope with this status they will have to adapt to the user, especially by learning knowledge or skills from him that they may lack. In this context, their interaction should be natural and evoke the same cooperative mechanisms that humans use. At the core of those mechanisms is the concept of action: what is an action, how do humans recognize actions, how do they produce or describe them? The modeling of aspects of these functionalities will be the basis of this thesis and will allow the implementation of higher-level cooperative mechanisms. One of these is the ability to handle "shared plans", which allow two (or more) individuals to cooperate in order to reach a goal shared by all. Throughout the thesis I will attempt to make links between the human development of these capabilities, their neurophysiology, and their robotic implementation. As a result of this work, I will present a fundamental difference between the representation of knowledge in humans and machines, still in the framework of cooperative interaction: the possible dissociation of a robot body and its cognition, which is not easily imaginable for humans. This dissociation will lead me to explore the "shared experience framework", a situation where a central artificial cognition manages the shared knowledge of multiple beings, each of them owning some kind of individuality. In the end this phenomenon will interrogate the various philosophies of mind by asking the question of the attribution of a mind to a machine and the consequences of such a possibility regarding the human mind.
Jones, Joshua K. "Empirically-based self-diagnosis and repair of domain knowledge." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33931.
Arruda, Rodrigo Lopes Setti de. "Uma arquitetura híbrida aplicada em problemas de aprendizagem por reforço." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259078.
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: With the ever-growing use of cognitive systems in a wide range of applications, high expectations and a strong demand have arisen for machines that are increasingly autonomous, intelligent and creative in real-world problem solving. In several cases, the challenges call for strong learning and adaptation capabilities. This work deals with the concepts of reinforcement learning and discusses the main solution approaches and problem variations. Subsequently, it builds a hybrid proposal incorporating other machine learning ideas, and validates the proposal with simulated experiments. The experiments point out the main advantages of the proposed methodology, founded on its capability to handle continuous-space environments and to learn an optimal policy while following an exploratory one. The proposed architecture is hybrid in the sense that it is based on a multi-layer perceptron neural network coupled with a function approximator called wire-fitting. This architecture is coordinated by a dynamic and adaptive algorithm which merges concepts from dynamic programming, Monte Carlo analysis, temporal difference learning, and eligibility traces. The proposed model is used to solve optimal control problems, by means of reinforcement learning, in scenarios with continuous variables and nonlinear dynamics. Two different instances of control problems, well discussed in the pertinent literature, are presented and tested with the same architecture.
Master's
Computer Engineering
Master of Electrical Engineering
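The wire-fitting approximator named in the abstract above admits a compact sketch (a generic illustration of wire-fitting interpolation, not the dissertation's architecture; the smoothing constants `c` and `eps` and the example wires are assumptions). Q-values over a continuous action are interpolated from a few control wires (aᵢ, qᵢ), and the interpolation is constructed so that its maximum over actions lies at one of the wires, which makes the greedy action cheap to find:

```python
def wire_fitting_q(action, wires, c=0.1, eps=1e-6):
    """Interpolate Q(s, a) from control wires (a_i, q_i) for one state.
    Each weight decays with the distance to a_i and with the gap to the
    best q_i, so the interpolation peaks at the highest-valued wire."""
    q_best = max(q for _, q in wires)
    num = den = 0.0
    for a_i, q_i in wires:
        w = 1.0 / ((action - a_i) ** 2 + c * (q_best - q_i) + eps)
        num += w * q_i
        den += w
    return num / den

# Three wires for some state; the greedy action should be a = 0.0,
# the location of the highest-valued wire.
wires = [(-1.0, 0.2), (0.0, 1.0), (1.0, 0.5)]
grid = [i / 10.0 for i in range(-20, 21)]
greedy = max(grid, key=lambda a: wire_fitting_q(a, wires))
```

In a full agent the wire positions and values would themselves be outputs of a neural network conditioned on the state; here they are fixed so the interpolation behaviour is easy to inspect.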
Coursey, Kino High. "An Approach Towards Self-Supervised Classification Using Cyc." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5470/.
Vurkaç, Mehmet. "Prestructuring Multilayer Perceptrons based on Information-Theoretic Modeling of a Partido-Alto-based Grammar for Afro-Brazilian Music: Enhanced Generalization and Principles of Parsimony, including an Investigation of Statistical Paradigms." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/384.
Barraquand, Rémi. "Designing Sociable Technologies." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM010/document.
This thesis investigates the design of sociable technologies and is divided into three main parts, described below. In the first part, we introduce sociable technologies. We review the definition of technology and propose categories of technologies according to the motivation underlying their design: improvement of control, improvement of communication or improvement of cooperation. Sociable technologies are then presented as an extension of techniques to improve cooperation. The design of sociable technologies is then discussed, leading to the observation that acquisition of social common sense is a key challenge for designing sociable technologies. Finally, polite technologies are presented as an approach for acquiring social common sense. In the second part, we focus on the premises for the design of sociable technologies. A key aspect of social common sense is the ability to act appropriately in social situations. Associating appropriate behaviour with social situations is presented as a key method for implementing polite technologies. Reinforcement learning is proposed as a method for learning such associations, and variations of this algorithm are experimentally evaluated. Learning the association between situation and behaviour relies on the strong assumption that mutual understanding of social situations can be achieved between technologies and people during interaction. We argue that in order to design sociable technologies, we must change the model of communication used by our technologies. We propose to replace the well-known code model of communication with the ostensive-inferential model proposed by Sperber and Wilson. Hypotheses raised by this approach are evaluated in an experiment conducted in a smart environment, where subjects, in groups of two or three, are asked to collaborate with a smart environment in order to teach it how to behave in an automated meeting.
A novel experimental methodology is presented: the Sorceress of Oz. The results collected from this experiment validate our hypothesis and provide insightful information for the design. We conclude by presenting what we believe are the premises for the design of sociable technologies. The final part of the thesis concerns an infrastructure for the design of sociable technologies. This infrastructure provides support for three fundamental components. First, it provides support for an inferential model of context. This inferential model of context is presented; a software architecture is proposed and evaluated in an experiment conducted in a smart environment. Second, it provides support for reasoning by analogy and introduces the concept of eigensituations. The advantages of this representation are discussed and evaluated in an experiment. Finally, it provides support for ostensive-inferential communication and introduces the concept of ostensive interfaces.
Bertin, Clarice. "Driving factors for symbiotic collaborations between startups and large firms in open innovation ecosystems." Thesis, Strasbourg, 2020. https://publication-theses.unistra.fr/restreint/theses_doctorat/Bertin_Clarice_2020_ED221.pdf.
Collaboration between startups and large firms is becoming increasingly necessary in the current context of open innovation, accelerating market demand and the increasingly rapid race to innovate. These asymmetrical partners, however, present significant differences that can generate a distance between them and jeopardize the collaboration project. Beyond the dyad, other actors of the ecosystem, in particular innovation intermediaries, also participate in the collaborative project. The objective of this thesis is to bring out the factors fostering symbiotic collaboration between startups and large firms, based on the organizational and financial independence of the actors. This thesis also aims to show the interest of using an analogy with biological symbiosis between symbionts interacting in a given ecosystem. The aim is thus to highlight the balance factors of the relationship, in a win-win perspective. Starting from the differences brought to light through cognitive distance, this research studies the phenomenon of startup - large firm collaboration following an exploratory approach and a mixed qualitative and quantitative method, based on the case method. The study of 38 cases (leading to data collection from 53 respondents in the form of interviews and a survey) proposes a time-based, multi-perspective and holistic approach, mobilizing the theoretical framework of proximity (geographical, cognitive, social, organizational) and that of dynamic capabilities. This research resulted in four articles leading to several theoretical and managerial contributions. Firstly, the study from the startup's perspective allowed us to identify the factors fostering proximity and collaboration between startups and large firms at four levels: intra-organizational for the large firm, intra-organizational for the startup, inter-organizational, and ecosystemic.
Further exploration then highlighted the complementary skills of startup founding teams, compared to solo startuppers, as a source of proximity to large firms. The continuation of the study, from the perspective of large firms, brought to light the importance of management based on collective intelligence, as well as the evolving role of middle managers in large firms in the implementation of an open innovation strategy integrating a variety of actors, such as startups. Finally, the study of the perspective of innovation intermediaries regarding their roles in the development of startup - large firm collaboration allowed these different roles to emerge across three phases of the collaboration's construction, including that of constituting an external resource for the large firm for the regeneration of its dynamic capabilities. A transversal contribution is the identification and operationalization of the 2+1 phases of the collaboration along a chronological axis: the Upstream, Design and Process phases of the collaboration.
Russo, Nicholas A. "DiSH: Democracy in State Houses." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1967.
Повний текст джерела
Martin, Cyrille. "Composition flexible par planification automatique." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00864000.
Повний текст джерела
Salem, Tawfiq. "Learning to Map the Visual and Auditory World." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/86.
Повний текст джерела
Serafim, Eduardo Paz. "CollectMed: Extração e Reuso de Conhecimento Coletivo para o Registro Eletrônico em Saúde." Universidade Federal da Paraíba, 2011. http://tede.biblioteca.ufpb.br:8080/handle/tede/6045.
Повний текст джерела
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Several technological advances in recent years have made Electronic Health Record (EHR) systems a solid and viable alternative for progressively and efficiently replacing paper health records. The benefits are associated with methods for clinical decision support (CDS), data availability, ease of finding information, and other advantages inherent in computerized systems. However, many challenges and open research questions remain before the full potential of such systems is realized. For example, the amounts of clinical data that EHR systems store are very large. Many interests would benefit from a tool capable of performing automated, or more commonly semi-automated, analysis to search for useful patterns in the data stored in the system. Several studies indicate that machine learning efforts achieve excellent results in various areas, including clinical information. However, the effort required is still high, increasing the time spent on planning and processing, with high costs and large amounts of data needed. This work, in association with OpenCTI's CDS, seeks to significantly reduce the effort necessary to promote both the reuse of clinical information through automatic learning and the development of low-cost clinical decision support mechanisms. It offers these benefits to users of EHR systems through a simple but broad mechanism for analyzing the clinical data stored in clinical databases. This analysis follows a knowledge-extraction methodology using collective intelligence or data mining algorithms, through steps of search, selection, preprocessing, modeling, evaluation and application of the information extracted from these systems.
From this, clinical decision support mechanisms for EHRs may use the framework offered by CollectMed to retrieve, with greater ease and precision, more accurate information about specific medical conditions of their patients, according to what has already been recorded by health professionals in similar cases using the EHR.
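The extraction steps named above (selection, preprocessing, modeling, evaluation) can be sketched as a generic scikit-learn pipeline; the synthetic data below stands in for selected EHR records, and neither CollectMed nor OpenCTI is reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for records selected from a clinical database
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("preprocess", StandardScaler()),                                # preprocessing step
    ("model", DecisionTreeClassifier(max_depth=3, random_state=0)),  # modeling step
])
pipeline.fit(X_train, y_train)                            # fit on the training selection
acc = accuracy_score(y_test, pipeline.predict(X_test))    # evaluation step
print(round(acc, 2))
```

The pipeline object bundles the steps so the same preprocessing is applied identically at training and prediction time.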
Several technological advances in recent years have consolidated Electronic Health Record (EHR) systems as a viable alternative for progressively and efficiently replacing paper health records. The benefits are associated with clinical decision support methods, data availability, ease of searching for information, and other advantages inherent in computerized systems. However, many challenges and open research questions remain before the full potential of these systems is realized. For example, the amount of clinical data that EHR systems store is very large. Many interests would benefit from a tool capable of automated, or more commonly semi-automated, analysis to search for useful patterns in the data stored in the system. Several studies show that machine learning efforts achieve excellent results in many areas, including clinical information. However, the effort required is still high, increasing the time devoted to planning and execution, with high costs and a need for large volumes of data. This work, associated with OpenCTI's decision support system, seeks to significantly reduce the effort needed to promote both the reuse of clinical information through automatic learning and the development of low-cost clinical decision support mechanisms. It offers this benefit to users of EHR systems through a simple but broad mechanism for analyzing the clinical data stored in EHR databases.
This analysis is carried out through a knowledge-extraction methodology, using collective intelligence or data mining algorithms, going through steps of search, selection, preprocessing, modeling, evaluation and application of the information extracted from the systems. From this, clinical decision support mechanisms of EHRs may use the framework offered by CollectMed to retrieve, with greater ease and precision, more accurate information about the specific clinical conditions of their patients, according to what has already been recorded by health professionals in similar clinical cases persisted in the EHR.
Magnuson, Markus Amalthea. "Frihet, jämlikhet, cyborgskap : Drömmen om den mänskligare människan." Thesis, Stockholms universitet, Filmvetenskapliga institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-179043.
Повний текст джерела
Lima, Clodoaldo Aparecido de Moraes. "Comite de maquinas: uma abordagem unificada empregando maquinas de vetores-suporte." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261258.
Повний текст джерела
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Algorithms based on kernel methods stand out among the various machine learning techniques. They were initially employed in the implementation of support vector machines (SVMs). The SVM approach represents a high-performance nonparametric learning procedure for classification and regression. However, structural and parametric design aspects can lead to performance degradation. In the absence of a systematic, low-cost methodology for proposing optimally specified computational models, committee machines emerge as promising alternatives. There are static versions of committees, in the form of ensembles of components, and dynamic versions, in the form of mixtures of experts. In this study, the components of an ensemble and the experts of a mixture are taken to be SVMs. The objective is to jointly explore the potential of SVMs and committee machines by adopting a unified formulation. Several extensions and new configurations of committee machines are proposed, with comparative analyses indicating significant performance gains over other machine learning approaches commonly adopted for classification and regression.
Abstract: Algorithms based on kernel methods are prominent techniques among the available approaches for machine learning. They were initially applied to implement support vector machines (SVMs). The SVM approach represents a nonparametric learning procedure devoted to high-performance classification and regression tasks. However, structural and parametric aspects of the design may lead to performance degradation. In the absence of a systematic and low-cost methodology for the proposition of optimally specified computational models, committee machines emerge as promising alternatives. There exist static versions of committees, in the form of ensembles of components, and dynamic versions, in the form of mixtures of experts. In the present investigation, the components of an ensemble and the experts of a mixture are taken as SVMs. The aim is to jointly explore the potentialities of both SVMs and committee machines by means of a unified formulation. Several extensions and new configurations of committee machines are proposed, with comparative analyses that indicate significant gains in performance over other machine learning approaches commonly adopted for classification and regression.
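The static committee (ensemble) idea can be sketched with SVM components trained on bootstrap replicates and combined by majority vote; the data, committee size and kernel below are invented for illustration and do not reproduce the thesis's unified formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# A static committee whose components are SVMs, each trained on a
# bootstrap replicate of the data, combined by majority vote.
X, y = make_classification(n_samples=300, n_features=8, random_state=1)
rng = np.random.default_rng(1)

components = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample with replacement
    components.append(SVC(kernel="rbf").fit(X[idx], y[idx]))

votes = np.stack([c.predict(X) for c in components])  # shape (10, 300)
majority = (votes.mean(axis=0) > 0.5).astype(int)     # majority vote per example
print((majority == y).mean())                         # committee training accuracy
```

A dynamic version (mixture of experts) would additionally learn a gating model that weights each component's vote per input.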
Doutorado (Doctorate)
Engenharia de Computação (Computer Engineering)
Doutor em Engenharia Elétrica (Doctor of Electrical Engineering)
Åkerström, Otto. "Multi-Agent System for Coordinated Defence." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273582.
Повний текст джерела
Today's defence systems are becoming increasingly complex as technology evolves, and exploring new ways of solving problems becomes ever more important for a state-of-the-art defence. In particular, Artificial Intelligence (AI) is used in a growing number of industries such as logistics, warehouse management and defence. This thesis evaluates the possibility of using Reinforcement Learning (RL) in a coordinated air defence (ADC) scenario at Saab AB. To evaluate RL, a simplified ADC scenario is solved with two different methods, Q-learning and Deep Q-learning (DQL). The results of the two methods are discussed, as well as the limitations of Q-learning. DQL, on the other hand, proves relatively easy to apply to a more complex scenario. Finally, a last experiment with a much more complicated scenario demonstrates the scalability of DQL and provides a natural transition to future work.
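The tabular Q-learning method compared in the thesis fits in a few lines; the corridor environment below is a hypothetical toy, not the Saab ADC scenario.

```python
import random

# Tabular Q-learning on a 1-D corridor: the agent moves left/right and is
# rewarded on reaching the right end. Toy illustration of the update rule.
N, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in (1, -1)}

def greedy(s):
    return max((1, -1), key=lambda a: Q[(s, a)])

random.seed(0)
for _ in range(500):                       # episodes
    s = 0
    while s != GOAL:
        a = random.choice((1, -1)) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in (1, -1)) - Q[(s, a)])
        s = s2

print(round(Q[(3, 1)], 2))  # → 1.0 (value of stepping into the goal)
```

Deep Q-learning replaces the table `Q` with a neural network, which is what makes the method scale to larger state spaces.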
De, Wulf Martin. "From timed models to timed implementations." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210797.
Повний текст джерела
Computer Science is currently facing a grand challenge: finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking system or in a nuclear power plant, for example. They present several design difficulties: first, they are reactive systems, interacting indefinitely with their environment. Second, they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata, for which model checking is quite well studied.
In the first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We define a new semantics for timed automata, the AASAP semantics, that preserves the decidability properties for model checking while being implementable. Our notion of implementability is completely novel and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for analysis and code generation, and exemplify them on a case study of the well-known Philips Audio Control Protocol.
In the second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only imperfect information about the state of the system. In the process, we define a new algorithm, based on the monotonicity of the controllable-predecessors operator, for efficiently finding a controller, and we show some promising applications on a classical problem: the universality test for finite automata.
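The monotone controllable-predecessors operator can be illustrated on a toy finite game, iterated to a greatest fixpoint; the states and moves below are invented, and are far simpler than the hybrid-automaton setting of the thesis.

```python
# Controller picks an action at each state; the environment resolves the
# nondeterminism by picking any listed successor. Hypothetical toy game.
moves = {
    "s0": {"a": {"s1"}, "b": {"s2", "bad"}},
    "s1": {"a": {"s1"}},
    "s2": {"a": {"bad"}},
    "bad": {},
}

def cpre(X):
    """States from which SOME action keeps ALL possible successors inside X."""
    return {s for s, acts in moves.items()
            if any(succ <= X for succ in acts.values())}

# Greatest fixpoint: the largest set from which "bad" can be avoided forever
safe = set(moves) - {"bad"}
while True:
    nxt = safe & cpre(safe)
    if nxt == safe:
        break
    safe = nxt
print(sorted(safe))  # → ['s0', 's1']
```

Monotonicity of `cpre` (larger `X` gives a larger result) is what guarantees the iteration converges to the fixpoint.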
Doctorat en sciences, Spécialisation Informatique (Doctorate in Sciences, specialization in Computer Science)
info:eu-repo/semantics/nonPublished
Mroueh, Dit Injibar Mohamed. "Classification évidentielle mono- et multi-label : application à la détection de maladies cardio-vasculaires." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0011.
Повний текст джерела
This thesis focuses on the detection of cardiovascular diseases through the monitoring of physiological signals. The objective is to develop mono- and multi-label classification approaches, based on the theory of belief functions, to predict or diagnose a complication linked to one or more cardiovascular diseases. First, an approach providing parameter extraction and information modeling in an evidential framework is developed to predict atrial fibrillation, a cardiac arrhythmia. An extension of this approach uses a reject classification option and alternative information modeling. The thesis then broadens the field of application to cover several cardiovascular diseases at the same time. The problem is thus defined as a multi-label classification where the labels represent features of the diseases. A multi-label classification approach is developed in the evidential domain which makes use of correlations between diseases to increase diagnostic accuracy. Finally, a theoretical approach to multi-label classification, which takes advantage of the correlation between labels, is proposed. This ensemble method allows for efficient multi-label classification. The proposed approaches are validated using a public medical database, MIMIC III, hosted on PhysioNet.
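The evidential combination underlying such belief-function classifiers can be illustrated with Dempster's rule; the frame of discernment, the two mass functions and their values below are invented for illustration and are not taken from the thesis.

```python
# Dempster's rule of combination for two mass functions over a small frame
# of discernment. Focal sets are frozensets of labels.
def combine(m1, m2):
    joint, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            inter = A & B
            if inter:
                joint[inter] = joint.get(inter, 0.0) + v * w
            else:
                conflict += v * w              # mass assigned to the empty set
    # Normalize by the non-conflicting mass 1 - K
    return {A: v / (1.0 - conflict) for A, v in joint.items()}

AF = frozenset({"AF"})        # hypothesis: atrial fibrillation
NO = frozenset({"no_AF"})
ALL = AF | NO                 # total ignorance

m1 = {AF: 0.6, ALL: 0.4}      # evidence from one source
m2 = {AF: 0.5, NO: 0.3, ALL: 0.2}
m = combine(m1, m2)
print(round(m[AF], 3))  # → 0.756
```

Combining sources this way is what lets an evidential classifier pool partially conflicting physiological evidence into a single belief assignment.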
Andersson, Martin, and Marcus Mazouch. "Binary classification for predicting propensity to buy flight tickets. : A study on whether binary classification can be used to predict Scandinavian Airlines customers’ propensity to buy a flight ticket within the next seven days." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160855.
Повний текст джерела
A customer's propensity to make a particular purchase is a widely studied area that has been applied in several industries. This study shows that statistical binary classification models can be used to predict the propensity of Scandinavian Airlines customers to buy a trip within the next seven days. A comparison between logistic regression and support vector machines is presented, and logistic regression with a reduced number of parameters is chosen as the final model thanks to its simplicity and accuracy. The explanatory variables are exclusively booking history, while customer demographics and search data are shown to be insignificant.
Guazzelli, Alex. "Aprendizagem em sistemas hibridos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1994. http://hdl.handle.net/10183/25776.
Повний текст джерела
This dissertation presents two new connectionist models based on the adaptive resonance theory (ART): Simplified Fuzzy ARTMAP and Semantic ART (SMART). The modeling, adaptation, implementation and validation of these models are described, in association with HYCONES, a hybrid connectionist expert system for solving classification problems. HYCONES integrates the knowledge representation mechanism of frames with neural networks, incorporating the inherent qualities of the two paradigms. While the frames mechanism provides flexible constructs for modeling the domain knowledge, neural networks, implemented in HYCONES' first version by the combinatorial neuron model (CNM), provide the means for automatic knowledge acquisition from a case database, enabling as well the implementation of deductive and inductive learning. The adaptive resonance theory deals with systems that self-stabilize input patterns into recognition categories while maintaining a balance between the properties of plasticity and stability. ART includes a series of different connectionist models: Fuzzy ARTMAP, Fuzzy ART, ART 1, ART 2, and ART 3. Among them, Fuzzy ARTMAP stands out for being capable of learning analogical patterns using two basic ART modules. The Simplified Fuzzy ARTMAP model is a simplification of the Fuzzy ARTMAP neural network. In contrast to the original model, it is capable of learning analogical patterns using only one ART module, which is responsible for the categorization of the input patterns, plus one additional layer responsible for receiving and propagating the target patterns through the network. The absence of the second ART module does not hamper the Simplified Fuzzy ARTMAP model: the same performance levels are attained, ensured by the match-tracking strategy, which jointly maximizes generalization and minimizes predictive error.
Two medical domains were chosen to validate HYCONES performance: congenital heart diseases (CHD) and renal syndromes. To build the CHD case base, 66 medical records were extracted from the cardiac surgery database of the Institute of Cardiology RS (ICFUC-RS). These records cover the period from January 1986 to December 1990 and describe 22 cases of Atrial Septal Defect (ASD), 29 of Ventricular Septal Defect (VSD), and 15 of Atrioventricular Septal Defect (AVSD), the three most frequent congenital heart diseases. For validation purposes, 33 additional cases from the same database and period were also extracted: 13 of ASD, 10 of VSD and 10 of AVSD. To build the renal syndromes case base, 381 medical records from the database of the Escola Paulista de Medicina were analyzed and 58 evidences, covering the patients' clinical history and physical examination data, were semiautomatically extracted. Of the selected cases, 136 exhibit Uremia, 85 Nephritis, 100 Hypertension, and 60 Calculosis. From the 381 cases analyzed, 245 were randomly chosen to build the training set, while the remaining ones formed the testing set. To validate HYCONES II, 46 versions of the hybrid knowledge base (HKB) were built for the congenital heart diseases domain, and another 46 for the renal domain. For both medical domains, the HKBs were automatically generated from the training databases. Of these 46 versions, one operates with the CNM model and the other 45 with ART models, divided into three groups: 15 versions built with the Simplified Fuzzy ARTMAP model, 15 with the Simplified Fuzzy ARTMAP model without normalization of the input patterns, and 15 with the Semantic ART model. HYCONES II - Simplified Fuzzy ARTMAP and HYCONES - CNM performed similarly in the CHD domain.
The first correctly identified 29 of the 33 testing cases (87.9%), while the second correctly identified 31 of the same cases (93.9%). In the renal syndromes domain, however, the performance of HYCONES II - Simplified Fuzzy ARTMAP was superior to that of CNM (p < 0.05). The two versions correctly identified, respectively, 108 (85%) and 95 (74.8%) diagnoses of the 127 testing cases presented to the system. HYCONES II - Simplified Fuzzy ARTMAP therefore displayed a satisfactory performance. However, the semantic contents of the neural nets it generated were completely different from those stemming from the CNM version. The networks that pointed out the final diagnosis in HYCONES - CNM were very similar to the knowledge graphs elicited from experts in congenital heart diseases. On the other hand, the networks activated in HYCONES II - Simplified Fuzzy ARTMAP operated with far more evidences than the CNM version. Besides this quantitative difference, there was a striking qualitative discrepancy between the two models. The Simplified Fuzzy ARTMAP version, even though pointing to the correct diagnoses, used evidences that represented the complement coding of the input pattern. This coding, inherent to the Simplified Fuzzy ARTMAP model, duplicates the input pattern, generating a new one that depicts both the observed evidence (on-cell) and the absent evidence, relative to the total evidence employed to represent the input cases (off-cell). This coding blocks the HYCONES explanation mechanism, since medical doctors usually reach a diagnostic conclusion from a set of observed evidences rather than from their absence. The next step was to improve the semantic contents of the Simplified Fuzzy ARTMAP model. To achieve this, the complement coding process was removed and the modified model was revalidated on the same testing sets described above.
In the CHD domain, the performance of HYCONES II - Simplified Fuzzy ARTMAP without complement coding proved inferior to that of CNM (p < 0.05). The first model correctly singled out 25 of the 33 testing cases (75.8%), while the second correctly singled out 31 of the same 33 cases (93.9%). In the renal syndromes domain, the performances of HYCONES II - Simplified Fuzzy ARTMAP without complement coding and HYCONES - CNM were similar: the first correctly identified 98 of the 127 testing cases (77.2%), the second 95 of the same cases (74.8%). However, the recognition categories formed by this modified Simplified Fuzzy ARTMAP still presented quantitative and qualitative differences in their contents when compared to the networks activated by CNM and to the knowledge graphs elicited from experts. This discrepancy, although smaller than the one observed in the original Fuzzy ARTMAP model, still restrained the HYCONES explanation mechanism. The Semantic ART model (SMART) was then proposed, with the goal of improving the semantic contents of ART recognition categories. To build this new model, the Simplified Fuzzy ARTMAP architecture was preserved while its learning algorithm was replaced by the CNM inductive learning mechanism (the punishments-and-rewards algorithm, associated with the pruning and normalization mechanisms). A new validation phase was then performed on the same testing sets. For the CHD domain, the performance comparison among the SMART, Simplified Fuzzy ARTMAP, and CNM versions showed similar results: the first and second versions correctly identified 29 of the 33 testing cases (87.9%), while the third correctly identified 31 of the same testing cases (93.9%).
For the renal syndromes domain, the performance of HYCONES II - SMART was superior to that of the CNM version (p < 0.05) and equal to that of the Simplified Fuzzy ARTMAP version. SMART and Simplified Fuzzy ARTMAP correctly identified 108 of the 127 testing cases (85%), while the CNM version correctly identified 95 of the same 127 testing cases (74.8%). Finally, it was observed that the neural networks generated by HYCONES II - SMART had contents similar to the networks generated by CNM and to the knowledge graphs elicited from multiple experts. The main contributions of this dissertation are the design, implementation and validation of the Simplified Fuzzy ARTMAP and SMART models. The latter stands out for its learning mechanism, which gives a higher semantic value to the recognition categories when compared to the categories formed by conventional ART models. This important enhancement is obtained by incorporating specificity and relevance concepts into ART's dynamics. This dissertation, however, represents not only the design and validation of two new connectionist models but also the enrichment of HYCONES, obtained through the continuation of a previous MSc dissertation under the same supervision. The present work, therefore, gives the knowledge engineer the choice among three different neural networks: CNM, Semantic ART and Simplified Fuzzy ARTMAP, all of which display good performance. Indeed, the first and second models, in contrast to the third, support the context in a semantic way.
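The complement coding discussed above is easy to state concretely; this is a minimal sketch of the encoding alone, not of HYCONES or the ART networks.

```python
# Complement coding as used by Fuzzy ARTMAP: an input a in [0,1]^n is
# duplicated into (a, 1 - a), pairing each observed evidence value
# (on-cell) with its absence (off-cell).
def complement_code(a):
    return a + [1.0 - x for x in a]

pattern = [0.25, 1.0, 0.0]
coded = complement_code(pattern)
print(coded)       # → [0.25, 1.0, 0.0, 0.75, 0.0, 1.0]
print(sum(coded))  # → 3.0
```

The coded vector's city-block norm always equals n, which Fuzzy ART exploits for automatic input normalization; the off-cells are also what polluted the explanation mechanism described above, since half of the coded evidence represents absences.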