Dissertations / Theses on the topic 'Knowledge engineering and artificial intelligence'

To see the other types of publications on this topic, follow the link: Knowledge engineering and artificial intelligence.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Knowledge engineering and artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Malmborn, Albin, and Linus Sjöberg. "Implementing Artificial intelligence." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20942.

Full text
Abstract:
The purpose of this paper is to investigate whether guidelines can be developed for what businesses need to take into account before implementing artificial intelligence. The study highlights factors that help companies understand what such a digital transition requires, as well as the obstacles they must overcome in order to succeed. Data collection was conducted in two parts: first a literature study, then qualitative, semi-structured interviews. These were analyzed with complementary methods and interpreted to identify patterns that could answer the study's main question: What must Swedish organizations in the private sector consider in order to successfully implement artificial intelligence in their operations? The results were produced by comparing scientific texts and interviews, to investigate whether the academic and practical views differ. The study identified eight factors that companies should consider before implementing artificial intelligence. The authors hope that the study will promote Swedish development in artificial intelligence and thus generate greater national value and stronger international competitiveness.
APA, Harvard, Vancouver, ISO, and other styles
2

Collis, Jaron Clements. "An application of artificial intelligence to quantitative problem solving in engineering." Thesis, Queen's University Belfast, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361311.

Full text
3

Huang, Zan, Hsinchun Chen, Alan Yip, Gavin Ng, Fei Guo, Zhi-Kai Chen, and Mihail C. Roco. "Longitudinal patent analysis for nanoscale science and engineering: Country, institution and technology field." Kluwer, 2003. http://hdl.handle.net/10150/105834.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Nanoscale science and engineering (NSE) and related areas have seen rapid growth in recent years. The speed and scope of development in the field have made it essential for researchers to stay informed about progress across different laboratories, companies, industries and countries. In this project, we experimented with several analysis and visualization techniques on NSE-related United States patent documents to support various knowledge tasks. This paper presents results on the basic analysis of nanotechnology patents between 1976 and 2002, content map analysis and citation network analysis. The data have been obtained for individual countries, institutions and technology fields. The top 10 countries with the largest number of nanotechnology patents are the United States, Japan, France, the United Kingdom, Taiwan, Korea, the Netherlands, Switzerland, Italy and Australia. The fastest growth in the last 5 years has been in the chemical and pharmaceutical fields, followed by semiconductor devices. The results demonstrate the potential of information-based discovery and visualization technologies to capture knowledge about nanotechnology performance, knowledge transfer and development trends by analyzing patent documents.
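The basic longitudinal analysis described above, counting patents per country and per technology field, can be sketched with a few hypothetical records (the data below are invented for illustration, not taken from the study):

```python
from collections import Counter

# invented mini patent records: (country, year, technology field)
patents = [
    ("US", 1999, "chemical"), ("JP", 2000, "semiconductor"),
    ("US", 2001, "pharmaceutical"), ("FR", 2001, "chemical"),
    ("US", 2002, "chemical"),
]

by_country = Counter(country for country, _, _ in patents)
by_field = Counter(field for _, _, field in patents)
print(by_country.most_common(1), by_field.most_common(1))
# → [('US', 3)] [('chemical', 3)]
```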
4

Farzanegan, Akbar. "Knowledge-based optimization of mineral grinding circuits." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0027/NQ50158.pdf.

Full text
5

Ajit, Suraj. "Capture and maintenance of constraints in engineering design." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Restricted access until May 30, 2112. Online version available for University member only until May, 30 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=25928.

Full text
6

Hu, Jhyfang. "Towards a knowledge-based design support environment for design automation and performance evaluation." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184804.

Full text
Abstract:
The increasing complexity of systems has made the design task extremely difficult without the help of an expert's knowledge. The major goal of this dissertation is to develop an intelligent software shell, termed the Knowledge-Based Design Support Environment (KBDSE), to facilitate multi-level system design and performance evaluation. KBDSE employs a technique termed Knowledge Acquisition based on Representation (KAR) for acquiring design knowledge. With KAR, the acquired knowledge is automatically verified and transformed into a hierarchical, entity-based representation scheme, called the Frame and Rule Associated System Entity Structure (FRASES). To increase the efficiency of design reasoning, a Weight-Oriented FRASES Inference Engine (WOFIE) was developed. WOFIE supports different design methodologies (i.e., top-down, bottom-up, and hybrid) and derives all possible alternative design models in parallel. By appropriately setting up the priority of a specialization node, WOFIE is capable of emulating the design reasoning process conducted by a human expert. Design verification is accomplished by computer simulation. To facilitate performance analysis, experimental frames reflecting design objectives are automatically constructed. This automation allows the design model to be verified under various simulation circumstances without wasting labor in programming math-intensive models. Finally, the best design model is recommended by applying Multi-Criteria Decision Making (MCDM) methods to the simulation results. Generally speaking, KBDSE offers designers of complex systems mixed-level design and performance evaluation; knowledge-based design synthesis; lower-cost and faster simulation; and multi-criteria design analysis. As with most expert systems, the goal of KBDSE is not to replace human designers but to serve as an intelligent tool to increase design productivity.
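The final MCDM step, recommending the best design model from simulation results, can be illustrated with a simple weighted-sum ranking; the criteria, weights and scores below are invented, and KBDSE's actual MCDM methods may differ:

```python
# Weighted-sum MCDM sketch for ranking alternative design models.
CRITERIA_WEIGHTS = {"throughput": 0.5, "cost": 0.3, "reliability": 0.2}

# scores normalized to [0, 1], higher is better (cost scored as cheapness)
designs = {
    "A": {"throughput": 0.9, "cost": 0.4, "reliability": 0.7},
    "B": {"throughput": 0.6, "cost": 0.9, "reliability": 0.8},
}

def score(design):
    """Weighted sum of a design's criterion scores."""
    return sum(w * design[c] for c, w in CRITERIA_WEIGHTS.items())

best = max(designs, key=lambda name: score(designs[name]))
print(best, round(score(designs[best]), 2))  # → B 0.73
```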
7

Burge, Janet E. "Software Engineering Using design RATionale." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-050205-085625/.

Full text
Abstract:
Dissertation (Ph.D.) -- Worcester Polytechnic Institute.
Keywords: software engineering; inference; knowledge representation; software maintenance; design rationale. Includes bibliographical references (p. 202-211).
8

Tremblay, Luc 1962. "A dimensional analysis system for knowledge-aided design in electromagnetics." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23758.

Full text
Abstract:
This thesis considers dimensional analysis theory in engineering. A Knowledge-Aided Design (KAD) Tool is presented that supports many aspects of dimensional analysis for electromagnetics. The KAD Tool was coded in Lisp with Allegro Common Lisp in a Microsoft Windows environment on a PC with a 486 microprocessor, and comprises 10,196 lines of code. The mathematical functions are supported by the mathematical libraries of the software MAPLE. A menu with nine choices, corresponding to nine functionalities of dimensional analysis, is offered to the user.
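The core bookkeeping of dimensional analysis, representing dimensions as exponent vectors that add under multiplication, is easy to sketch for an electromagnetic example (a hypothetical illustration, not the KAD Tool's Lisp implementation):

```python
# Dimensions as SI base-unit exponent vectors (M, L, T, I); exponents add
# under multiplication and subtract under division.
DIM = {
    "voltage":    (1, 2, -3, -1),  # kg * m^2 * s^-3 * A^-1
    "current":    (0, 0, 0, 1),    # A
    "resistance": (1, 2, -3, -2),  # kg * m^2 * s^-3 * A^-2
}

def dim_product(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dim_quotient(a, b):
    return tuple(x - y for x, y in zip(a, b))

# Ohm's law is dimensionally consistent: V / I has the dimension of R
assert dim_quotient(DIM["voltage"], DIM["current"]) == DIM["resistance"]
print(dim_product(DIM["current"], DIM["resistance"]))  # → (1, 2, -3, -1)
```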
9

Moafipoor, Shahram. "Intelligent Personal Navigator Supported by Knowledge-Based Systems for Estimating Dead Reckoning Navigation Parameters." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1262043297.

Full text
10

Carbogim, Daniela Vasconcelos. "Dynamics in formal argumentation." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/591.

Full text
Abstract:
In this thesis we are concerned with the role of formal argumentation in artificial intelligence, in particular in the field of knowledge engineering. The intuition behind argumentation is that one can reason with imperfect information by constructing and weighing up arguments intended to give support for or against alternative conclusions. In dynamic argumentation, such arguments may be revised and strengthened in order to increase or decrease the acceptability of controversial positions. This thesis studies the theory, architecture, development and applications of formal argumentation systems from the procedural perspective of actually generating argumentation processes. First, the types of problems that can be tackled via the argumentation paradigm in knowledge engineering are characterised. Second, an abstract formal framework is built from an underlying set of axioms, represented here as executable logic programs. Finally, an architecture for dynamic argumentation systems is defined, and domain-specific applications are presented within different domains, thus grounding problems with very distinctive characteristics in a common argumentation framework. The methods and definitions described in this thesis have been assessed on various bases, including the reconstruction of informal arguments and of arguments captured by existing formalisms, the relation between our framework and these formalisms, and examples of dynamic argumentation applications in the safety-engineering and multi-agent domains.
11

Silvestre, André Meyer. "Raciocínio probabilístico aplicado ao diagnóstico de insuficiência cardíaca congestiva (ICC)." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/12679.

Full text
Abstract:
Bayesian networks (BN) constitute an adequate computational model for probabilistic inference in domains that involve uncertainty. Medical diagnostic reasoning may be characterized as an act of probabilistic inference in an uncertain domain, where the elaboration of diagnostic hypotheses is represented by the stratification of diseases according to their associated probabilities. The present dissertation investigates the methodology used in the construction and validation of Bayesian networks in the medical field, and uses this knowledge to develop a probabilistic network to aid in the diagnosis of heart failure (HF). This BN, implemented as part of the SEAMED/AMPLIA system, would serve to alert clinicians toward early diagnosis and treatment of HF, providing faster and more efficient care for patients with this pathology.
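In its simplest form, the diagnostic stratification described above reduces to Bayes' rule: the posterior probability of a disease given a finding. A minimal sketch in Python; all probabilities below are invented for illustration, not taken from the dissertation or the SEAMED/AMPLIA system:

```python
# Minimal Bayes-rule sketch of diagnostic hypothesis stratification.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive finding) by Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# hypothetical numbers: 2% prior for heart failure, a finding seen in 90% of
# HF patients and in 10% of the others
print(round(posterior(0.02, 0.90, 0.10), 3))  # → 0.155
```

A full Bayesian network chains many such conditional probabilities over a graph of findings, but the update at each node follows this same rule.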
12

Boggan, Chad M. "US Knowledge Worker Augmentation versus Replacement with AI Software| Impact on Organizational Returns, Innovation, and Resistance." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10979025.

Full text
Abstract:

This praxis studies the effects on organizations of replacing US knowledge workers with artificial intelligence software (automation) and enhancing US knowledge workers with artificial intelligence software (augmentation). The effects on organizational innovation, resistance, and return on investment (ROI) are studied.

The main purpose of this study is to confirm the relationships between automation/augmentation, innovation, resistance, and ROI. This study is also meant to aid researchers, policy makers, executives, and others that have influence over automation and augmentation decisions. The implications of these decisions will reverberate through the multi-billion-dollar US job market in the coming years.

Quantitative methods were used to look at researched examples of both automation and augmentation. Data from 1993 to 2018 was gathered and assessed on innovation, resistance, and ROI from a number of different industries and a number of different types of firms based on size and ownership structure (public or private). Statistical methods were then used to compare the effects of automation and augmentation on organizations.

Research data was gathered to study the relationship between innovation and ROI, as well as the relationship between resistance and ROI. These relationships were used to combine ROI, innovation, and resistance using Monte Carlo simulations. This combination of ROI, innovation, and resistance was then used to compare the combined effects of automation and augmentation on organizations over time.
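The Monte Carlo combination of ROI, innovation, and resistance can be sketched as follows; the linear weighting and the distributions are assumptions for illustration, not the praxis's actual model:

```python
import random

# Illustrative Monte Carlo sketch: combine uncertain ROI with innovation and
# resistance scores into one mean outcome per strategy (weights invented).
def simulate(roi_mu, roi_sigma, innovation, resistance, trials=10_000, seed=42):
    """Mean combined outcome of a strategy under ROI uncertainty."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        roi = rng.gauss(roi_mu, roi_sigma)  # draw an uncertain ROI
        total += roi + 0.5 * innovation - 0.5 * resistance
    return total / trials

# invented inputs: automation attracts more resistance, augmentation more innovation
automation = simulate(roi_mu=0.12, roi_sigma=0.05, innovation=0.2, resistance=0.6)
augmentation = simulate(roi_mu=0.10, roi_sigma=0.05, innovation=0.5, resistance=0.2)
print(automation, augmentation)
```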

13

Motaabbed, Asghar B. 1959. "A knowledge acquisition scheme for fault diagnosis in complex manufacturing processes." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278266.

Full text
Abstract:
This thesis introduces the problem of knowledge acquisition in developing a Trouble Shooting Guide (TSG) for equipment used in integrated circuit manufacturing. The TSG is considered a first step in developing an Expert Diagnostic System (EDS). The research focuses on the acquisition and refinement of actual knowledge from the manufacturing domain, and a Hierarchical Data Collection (HDC) system is introduced to address the knowledge-acquisition bottleneck in developing the EDS. An integrated circuit manufacturing environment is introduced, and issues relating to the collection and assessment of knowledge concerning the performance of the machine park are discussed. Raw data about equipment used in the manufacturing environment are studied and the results discussed. A systematic classification of symptoms, failures, and repair activities is presented.
14

Kairouz, Joseph. "Patient data management system medical knowledge-base evaluation." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=24060.

Full text
Abstract:
The purpose of this thesis is to evaluate the medical data management expert system at the Pediatric Intensive Care Unit of the Montreal Children's Hospital. The objective of this study is to provide a systematic method to evaluate and, progressively improve the knowledge embedded in the medical expert system.
Following a literature survey of evaluation techniques and the architectures of existing expert systems, an overview of the Patient Data Management System hardware and software components is presented, and the design of the Expert Monitoring System is elaborated. Following its installation in the Intensive Care Unit, the performance of the Expert Monitoring System was evaluated on real vital-sign data, and corrections were formulated. A progressive evaluation technique, a new methodology for evaluating an expert-system knowledge base, is proposed for subsequent corrections and evaluations of the Expert Monitoring System.
15

Thomas, Christopher J. "Knowledge Acquisition in a System." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1357753287.

Full text
16

Gründer, Willi, and Denis Polyakov. "Konstruktionslösungen mit Hilfe von Künstlicher Intelligenz." Thelem Universitätsverlag & Buchhandlung GmbH & Co. KG, 2019. https://tud.qucosa.de/id/qucosa%3A36932.

Full text
Abstract:
This article proposes an approach to an 'intellectual design assistant' based on digitised experience. These assistants, which build on analytical and numerical methods, are created using methods of artificial intelligence. They are intended to take up already known knowledge elements and experience and to extend them through continued reflection against reality, without elaborate algorithm construction and time-consuming numerics hindering the transfer of new, often inherent insights into daily practice and thus into quality management. In this way, knowledge gaps between departments can be eliminated quickly and differences in training between employees can be evened out. On the other hand, companies can also use this approach to capture their particular strengths through an automatic comparison of similar designs. [... from the introduction]
17

Amaral, Janete Pereira do. "Um estudo sobre objetos com comportamento inteligente." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1993. http://hdl.handle.net/10183/25458.

Full text
Abstract:
Aiming at defining structures for Software Engineering Environments (SEE), much research has been accomplished. Some of these research results have pointed out the need to provide intelligence to coordinate and assist the software development process effectively. The object-oriented paradigm (OOP) has been applied to implement intelligent systems with several approaches, and the OOP as an SEE structure has been experimented with as well. The system construction approach in which intelligence is distributed among its elements, proposed by Hewitt, Minsky and Lieberman, elicits the idea of modelling objects that act as problem solvers, working cooperatively to reach the system objectives, and of experimenting with this approach in the construction of intelligent environments. In this dissertation, a study of the use of the OOP in the implementation of intelligent systems is presented. An extension to the object concept is proposed to allow objects to exhibit flexible behavior, to have autonomy in fulfilling their tasks, to acquire new knowledge, and to interact with the external environment. The existence of objects with this ability enables the construction of modular and evolutionary intelligent systems, making their design, implementation and maintenance easier. The OOP's basic concepts and main extensions are discussed to elucidate the terms used throughout this dissertation. Some approaches to intelligence and intelligent behavior are presented, emphasizing knowledge, learning and flexible behavior; this flexible behavior comes from the acquisition of new knowledge and from the analysis of environment conditions. The main knowledge representation schemes and several problem-solving strategies used in intelligent systems are presented to provide background for the analysis of the representational characteristics of the OOP. The use of the OOP as a knowledge representation scheme is analyzed, emphasizing its advantages and shortcomings. 
In order to identify mechanisms employed in the implementation of intelligent systems, a survey of proposals for using the OOP in such systems is synthesized. The survey shows a tendency to support the distributed-intelligence approach through the knowledge representation model provided by the OOP combined with positive characteristics of other paradigms. An object model with intelligent behavior is proposed in which, besides the declarative and procedural aspects of knowledge represented through instance variables and methods, mechanisms are encapsulated to provide autonomy and flexible behavior, to allow new knowledge acquisition, and to support communication with users. To provide autonomy, a message manager was developed which receives requests from other objects, places them in a queue and dispatches them according to its knowledge and an analysis of environment conditions. Using logic programming resources, facilities are introduced to obtain behavioral flexibility through behavioral rules evaluated with backward chaining. Knowledge is acquired through facts, procedures, and behavioral rules asserted in or retracted from the object's knowledge base. To provide assistance and report on their activities, the objects exhibit the firing status of their behavioral rules, together with lists of granted requests and of those kept in the message queue. To explore the properties of the proposed model, a prototype intelligent assistant supporting the activities of the system development process was implemented, using the Smalltalk/V language with logic programming resources integrated through Prolog/V. The experience acquired in using this model indicated the feasibility of adding these characteristics to the OOP object model and the simplicity of its implementation using multiparadigm resources. This model is therefore a viable alternative for the construction of intelligent environments.
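The message-manager idea, an object that queues incoming requests and serves them according to its own knowledge of the environment, can be sketched as follows; the class name, rule, and messages are invented for illustration:

```python
from collections import deque

# Sketch of an object with a message manager: requests queue up and are
# dispatched according to a simple behavioral rule.
class IntelligentObject:
    def __init__(self):
        self.queue = deque()               # message manager's request queue
        self.knowledge = {"busy": False}   # a minimal knowledge base
        self.log = []                      # report of granted requests

    def request(self, message):
        self.queue.append(message)

    def step(self):
        # behavioral rule: serve the next request only if not busy
        if self.queue and not self.knowledge["busy"]:
            self.log.append("handled " + self.queue.popleft())

obj = IntelligentObject()
obj.request("compile report")
obj.request("update model")
obj.step()
print(obj.log, list(obj.queue))  # → ['handled compile report'] ['update model']
```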
18

Nannetti, Federica. "Expert Systems in Maintenance Diagnostic." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The aim of this work is to introduce the reader to the use of expert systems in maintenance, especially in diagnostics. The dissertation is structured in three main parts. The first is an overview of maintenance and of its most common methods, focusing especially on those most relevant to expert systems. In the second part the reader can find the types, relevant characteristics and history of expert systems. The last part of the thesis is devoted to the development of the case study, introduced by a description of the software used for its design (VisiRule). The work concludes with considerations on the positive aspects of using expert systems in maintenance diagnostics.
19

Aschinger, Markus Wolfgang. "LoCo : a logic for configuration problems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:728d1918-e5f2-4c02-849a-115aecde856a.

Full text
Abstract:
This thesis deals with the problem of technical product configuration: Connect individual components conforming to a component catalogue in order to meet a given objective while respecting certain constraints. Solving such configuration problems is one of the major success stories of applied AI research: In industrial environments they support the configuration of complex products and, compared to manual processes, help to reduce error rates and increase throughput. Practical applications are nowadays ubiquitous and range from configurable cars to the configuration of telephone communication switching units. In the classical definition of a configuration problem the number of components to be used is fixed while in practice, however, the number of components needed is often not easily stated beforehand. Existing knowledge representation (KR) formalisms expressive enough to deal with this dynamic aspect of configuration require that explicit bounds on all generated components are given as well as extensive knowledge about the underlying solving algorithms. To date there is still a lack of high-level KR tools being able to cope with these demands. In this work we present LoCo, a fragment of classical first order logic that has been carefully tailored for expressing technical product configuration problems. The core feature of LoCo is that the number of components used in configurations does not have to be finitely bounded explicitly, but instead is bounded implicitly through the axioms. We identify configurations with models of the logic; hence, configuration finding becomes model finding. LoCo serves as a high-level representation language which allows the modelling of general configuration problems in an intuitive and declarative way without the need of having knowledge about underlying solving algorithms; in fact, the specification gets automatically translated into low-level executable code. LoCo allows translations into different target languages. 
We present the language, related algorithms and complexity results as well as a prototypical implementation via answer-set programming.
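Read operationally, "configuration finding becomes model finding" means a solver searches for component counts that satisfy the catalogue constraints. A toy brute-force sketch in Python; the catalogue, constraints and numbers are invented, and LoCo itself compiles specifications to answer-set programs rather than enumerating:

```python
from itertools import product

# invented two-component catalogue: frames hold cards, cards provide ports
CATALOGUE = {
    "frame": {"slots": 4, "cost": 100},
    "card":  {"ports": 8, "cost": 30},
}

def configure(required_ports, max_cost):
    """Cheapest (frames, cards, cost) satisfying all constraints, else None."""
    best = None
    for frames, cards in product(range(1, 5), range(1, 17)):
        if cards > frames * CATALOGUE["frame"]["slots"]:
            continue  # every card needs a frame slot
        if cards * CATALOGUE["card"]["ports"] < required_ports:
            continue  # must serve all required ports
        cost = (frames * CATALOGUE["frame"]["cost"]
                + cards * CATALOGUE["card"]["cost"])
        if cost <= max_cost and (best is None or cost < best[2]):
            best = (frames, cards, cost)
    return best

print(configure(required_ports=20, max_cost=500))  # → (1, 3, 190)
```

Note how the bound on the number of components falls out of the constraints (here, ports needed and budget) rather than being fixed in advance, which is the dynamic aspect the thesis addresses.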
20

Clark, Matthew C. "Knowledge guided processing of magnetic resonance images of the brain [electronic resource] / by Matthew C. Clark." University of South Florida, 2001. http://purl.fcla.edu/fcla/etd/SFE0000001.

Full text
Abstract:
Includes vita.
Title from PDF of title page.
Document formatted into pages; contains 222 pages.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
ABSTRACT: This dissertation presents a knowledge-guided expert system that is capable of applying routines for multispectral analysis, (un)supervised clustering, and basic image processing to automatically detect and segment brain tissue abnormalities, and then label glioblastoma-multiforme brain tumors in magnetic resonance volumes of the human brain. The magnetic resonance images used here consist of three feature images (T1-weighted, proton density, T2-weighted) and the system is designed to be independent of a particular scanning protocol. Separate, but contiguous 2D slices in the transaxial plane form a brain volume. This allows complete tumor volumes to be measured and, if repeat scans are taken over time, the system may be used to monitor tumor response to past treatments and aid in the planning of future treatment. Furthermore, once processing begins, the system is completely unsupervised, thus avoiding the problems of human variability found in supervised segmentation efforts. Each slice is initially segmented by an unsupervised fuzzy c-means algorithm. The segmented image, along with its respective cluster centers, is then analyzed by a rule-based expert system which iteratively locates tissues of interest based on the hierarchy of cluster centers in feature space. Model-based recognition techniques analyze tissues of interest by searching for expected characteristics and comparing those found with previously defined qualitative models. Normal/abnormal classification is performed through a default reasoning method: if a significant model deviation is found, the slice is considered abnormal; otherwise, the slice is considered normal. Tumor segmentation in abnormal slices begins with multispectral histogram analysis and thresholding to separate suspected tumor from the rest of the intra-cranial region.
The tumor is then refined with a variant of seed growing, followed by spatial component analysis and a final thresholding step to remove non-tumor pixels. The knowledge used in this system was extracted from general principles of magnetic resonance imaging, the distributions of individual voxels and cluster centers in feature space, and anatomical information. Knowledge is used both for single-slice processing and information propagation between slices. A standard rule-based expert system shell (CLIPS) was modified to include the multispectral analysis, clustering, and image processing tools. A total of sixty-three volume data sets from eight patients and seventeen volunteers (four with and thirteen without gadolinium enhancement), acquired from a single magnetic resonance imaging system with slightly varying scanning protocols, were available for processing. All volumes were processed for normal/abnormal classification. Tumor segmentation was performed on the abnormal slices and the results were compared with a radiologist-labeled 'ground truth' tumor volume and tumor segmentations created by applying supervised k-nearest neighbors, a partially supervised variant of the fuzzy c-means clustering algorithm, and a commercially available seed growing package. The results of the developed automatic system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
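The seed-growing refinement mentioned in the abstract can be sketched generically; the following 4-connected region grower and the toy image are illustrative assumptions, not the dissertation's actual implementation:

```python
def grow_region(image, seed, threshold):
    """4-connected seed growing: starting from a seed pixel, collect
    connected pixels whose intensity meets the threshold."""
    rows, cols = len(image), len(image[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if image[r][c] < threshold:
            continue  # below threshold: not part of the grown region
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

scan = [
    [0, 0, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 9, 9],
    [9, 0, 0, 0],  # bright pixel not connected to the seeded region
]
tumour = grow_region(scan, seed=(1, 1), threshold=5)
print(len(tumour))  # -> 5; the isolated pixel at (3, 0) is excluded
```

The spatial component analysis the abstract mentions plays a similar role: it discards bright pixels that are not spatially connected to the suspected tumor.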
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
21

Lugo, Gustavo Alberto Giménez. "Um modelo de sistemas multiagentes para partilha de conhecimento utilizando redes sociais comunitárias." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-15112004-190053/.

Full text
Abstract:
Este trabalho apresenta um modelo para sistemas multiagentes constituídos por agentes de informação destinados a auxiliar comunidades humanas que partilham conhecimento. Tais agentes são cientes do entorno social dos usuários, pois possuem representações do conhecimento dos mesmos e também das redes sociais que os circundam, organizadas subjetivamente. Conceitos pertencentes às suas ontologias são estendidos com informação organizacional para representar de forma explícita as situações nas quais foram aprendidos e utilizados. Discute-se como tais agentes autônomos podem raciocinar sobre o uso e a privacidade de conceitos em termos de construções organizacionais, possibilitando raciocinar sobre papéis sociais em comunidades abertas na Internet.
This work presents a model for multi-agent systems for information agents supporting information-sharing communities. Such agents are socially aware in the sense that they have representations of the users' knowledge and also of their social networks, which are subjectively organized. Concepts in their ontologies are extended with organizational information to record explicitly the situations in which they were learned and used. It is discussed how such autonomous agents are allowed to reason about concept usage and privacy in terms of organizational constructs, paving the way to reason about social roles in open Internet communities.
APA, Harvard, Vancouver, ISO, and other styles
22

Hao, Shilun. "IDS---Intelligent Dougong System: A Knowledge-based and Graphical Simulation of Construction Processes of China’s Song-style Dougong System." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417702752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Razo, Ruvalcaba Luis Alfonso. "Meta-analysis applied to Multi-agent Software Engineering." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM107/document.

Full text
Abstract:
D'un point de vue général, cette thèse aborde le problème de trouver, à partir d'un ensemble de blocs de construction, un sous-ensemble qui procure une solution à un problème donné. Ceci est fait en tenant compte de la compatibilité de chacun des blocs de construction par rapport au problème et de l'aptitude de ces parties à interagir pour former ensemble une solution. Dans la perspective particulière de la thèse, les blocs de construction sont des méta-modèles et le problème donné est la description d'un problème qui peut être résolu à l'aide d'un logiciel, en l'occurrence d'un système multi-agents. Le noyau de la proposition de thèse est un processus qui analyse un problème donné, puis propose une solution possible basée sur un système multi-agents pour ce problème. Il peut également indiquer que le problème ne peut pas être résolu par ce paradigme. Le processus traité par la thèse comprend les étapes principales suivantes : (1) À travers un processus de caractérisation, on analyse la description du problème pour localiser le domaine de solutions, puis on choisit une liste de méta-modèles candidats. (2) Les caractérisations des méta-modèles candidats sont prises en compte ; elles sont définies dans plusieurs domaines de solution, et le choix se fait parmi les domaines trouvés à l'étape précédente. (3) On crée un système multi-agents où chaque agent représente un méta-modèle candidat. Dans cette société, les agents interagissent les uns avec les autres pour trouver un groupe de méta-modèles adapté à représenter une solution, chaque agent utilisant comme critère la compatibilité du méta-modèle qu'il représente avec les autres. On évalue également la compatibilité des groupes créés avec le problème à résoudre afin de décider du groupe final, qui constitue la meilleure solution. Cette thèse se concentre sur la fourniture d'un processus et d'un outil prototype pour résoudre la dernière de ces étapes.
Par conséquent, l'approche proposée a été créée à l'aide de plusieurs concepts issus de la méta-analyse, de l'intelligence artificielle coopérative, de la cognition bayésienne, de l'incertitude, des probabilités et des statistiques.
From a general point of view, this thesis addresses the automatic construction of a solution by choosing a compatible set of building blocks to solve a given problem. Creating the solution takes into account the compatibility of each available building block with the problem, as well as the compatibility of the building blocks with one another within a solution. In the particular perspective of this thesis, the building blocks are meta-models and the given problem is the description of a problem that can be solved with software using the multi-agent system paradigm. The core of the thesis proposal is a process, itself based on a multi-agent system, that analyzes the given problem and the available meta-models, matches them, and suggests one possible solution (based on meta-models) for the problem. If no solution is found, it also indicates that the problem cannot be solved through this paradigm using the available meta-models. The process consists of the following main steps: (1) Through a characterization process, the problem description is analyzed to locate the solution domain, which is then used to choose a list of the most domain-compatible meta-models as candidates. (2) Meta-model characterizations are also required, evaluating each meta-model's performance within each considered solution domain. (3) The matching step is built on a multi-agent system in which each agent represents a candidate meta-model. The agents interact with one another to find a group of suitable meta-models to represent a solution, each using as criteria the compatibility between its own meta-model and those represented by the other agents. When a group is found, its overall compatibility with the given problem is evaluated. Finally, each agent holds a solution group.
These groups are then compared to find the one most suitable for solving the problem and to decide the final group. This thesis focuses on providing a process and a prototype tool for this last step. The proposed approach draws on several concepts from meta-analysis, cooperative artificial intelligence, Bayesian cognition, uncertainty, probability and statistics.
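The group-formation step this abstract describes (agents scoring candidate groups by pairwise meta-model compatibility plus fit to the problem) might be sketched as an exhaustive search; all names, scores and the scoring rule below are hypothetical illustrations, not the thesis's actual criteria:

```python
from itertools import combinations

# Hypothetical pairwise compatibility scores between meta-models.
COMPAT = {
    frozenset({"agents", "roles"}): 0.9,
    frozenset({"agents", "plans"}): 0.7,
    frozenset({"roles", "plans"}): 0.2,
}

def best_group(candidates, problem_fit, size):
    """Pick the group whose summed pairwise compatibility plus
    fit-to-problem score is highest (exhaustive, for illustration)."""
    def score(group):
        pairwise = sum(COMPAT.get(frozenset(p), 0.0)
                       for p in combinations(group, 2))
        return pairwise + sum(problem_fit[m] for m in group)
    return max(combinations(candidates, size), key=score)

fit = {"agents": 0.8, "roles": 0.5, "plans": 0.6}
print(best_group(["agents", "roles", "plans"], fit, 2))
```

In the thesis this search is distributed across negotiating agents rather than centralized; the sketch only shows the objective being optimized.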
APA, Harvard, Vancouver, ISO, and other styles
24

Hung, Victor C. "Robust dialog management through a context-centric architecture." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4639.

Full text
Abstract:
This dissertation presents and evaluates a method of managing spoken dialog interactions with a robust attention to fulfilling the human user's goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine's ability to communicate may be hindered by poor reception of utterances, caused by a user's inadequate command of a language and/or faults in the speech recognition facilities. Since a speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user's assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
ID: 029094516; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2010.; Includes bibliographical references (p. 280-301).
Ph.D.
Doctorate
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
25

EDIN, ANTON, and MARIAM QORBANZADA. "E-Learning as a tool to support the integration of machine learning in product development processes." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279757.

Full text
Abstract:
This research is concerned with possible applications of e-learning as an alternative to onsite training sessions when supporting the integration of machine learning into the product development process. Mainly, its aim was to study whether e-learning approaches are viable for laying a foundation for making machine learning more accessible in integrated product development processes. This topic is interesting because advances in the general understanding of it enable better remote learning as well as general scalability of knowledge transfer. To achieve this, two groups of employees belonging to the same corporate group but working in two very different geographical regions were asked to participate in a set of training sessions created by the authors. One group received the content via in-person workshops, whereas the other was invited to a series of remote tele-conferences. After both groups had participated in the sessions, some members were asked to be interviewed. Additionally, the authors arranged interviews with some of the participants' direct managers and project leaders to compare the participants' responses with those of stakeholders not participating in the workshops. A combination of a qualitative theoretical analysis and the interview responses was used as the basis for the presented results. Respondents indicated that they preferred the onsite training approach; however, further coding of the interview responses showed that there was little difference in the participants' ability to obtain knowledge. Interestingly, while the results point towards e-learning as a technology with many benefits, it seems that other shortcomings, mainly concerning the human interaction between learners, may hold back its full potential and thereby hinder its integration into product development processes.
Detta forskningsarbete fokuserar på tillämpningar av elektroniska utlärningsmetoder som alternativ till lokala lektioner vid integrering av maskininlärning i produktutvecklingsprocessen. Framförallt är syftet att undersöka om det går att använda elektroniska utlärningsmetoder för att göra maskininlärning mer tillgänglig i produktutvecklingsprocessen. Detta ämne presenterar sig som intressant då en djupare förståelse kring detta banar väg för att effektivisera lärande på distans samt skalbarheten av kunskapsspridning. För att uppnå detta bads två grupper av anställda hos samma företagsgrupp, men tillhörande olika geografiska områden, att ta del i ett upplägg av lektioner som författarna hade tagit fram. En grupp fick ta del av materialet genom seminarier, medan den andra bjöds in till att delta i en serie tele-lektioner. När båda deltagargrupperna hade genomgått lektionerna fick några deltagare förfrågningar om att bli intervjuade. Några av deltagarnas direkta chefer och projektledare intervjuades även för att kunna jämföra deltagarnas åsikter med icke-deltagande intressenters. En kombination av en kvalitativ teoretisk analys tillsammans med svaren från intervjuerna användes som bas för de presenterade resultaten. Svarande indikerade att de föredrog träningarna som hölls på plats, men vidare kodning av intervjusvaren visade att undervisningsmetoden inte hade någon större påverkan på deltagarnas förmåga att ta till sig materialet. Trots att resultatet pekar på att elektroniskt lärande är en teknik med många fördelar verkar det som att brister i teknikens förmåga att integrera mänsklig interaktion hindrar den från att nå sin fulla potential och därigenom även hindrar dess integration i produktutvecklingsprocessen.
APA, Harvard, Vancouver, ISO, and other styles
26

Tamaddon, Leila. "Artificiell intelligens eller intelligent läkekonst? : Om kropp, hälsa och ovisshet i digitaliseringens tidevarv." Thesis, Södertörns högskola, Centrum för praktisk kunskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-40741.

Full text
Abstract:
Denna essä syftar till att ur filosofiska och idéhistoriska perspektiv belysa utmaningar och möjligheter med artificiell intelligens (AI) och digitalisering inom hälso- och sjukvården, med fokus på läkekonst, kropp, hälsa och ovisshet. Essän undersöker hur automatisering och digitala vårdformer omformar läkekonstens grund, nämligen mötet mellan patienten och läkaren. Genom en fenomenologisk kritik av AI och teknikens väsen, belyses skillnaden mellan människan och maskinen och hur den levda erfarenheten är situerad, förkroppsligad, fylld av mening och delad med andra. Essän utforskar hur situationsunik kunskap som praktisk klokhet, fronesis, samt ett reflekterande förnuft, intellectus,kan hantera den ovisshet som är inbäddad i det allmänmedicinska mötet. Essän belyser även hur digitalisering och AI passar väl med pågående marknadsanpassning av sjukvården, där homo economicus och homo digitalis båda omformar kropp och hälsa till mätbara resurser och data. Avslutningsvis lyfts etiska dilemman kring AI och digitalisering, samt vikten av praktisk och existentiell kunskap som förutsättningar för utvecklandet och designen av en teknik som syftar främja det mänskligt goda.
This essay aims to illuminate challenges and opportunities with artificial intelligence (AI) and digitalization in health care, focusing on the art of medicine, body, health and uncertainty. The theoretical framework is mainly within the fields of phenomenology and philosophical hermeneutics. The essay explores how automatization and digital health care are transforming the essence of medicine: the patient – physician encounter. By a phenomenological critique of AI and the essence of technology, the essay highlights the difference between machines and humans and how lived experience is situated, embodied, filled with meaning and shared with others. The essay explores how situational knowledge such as practical wisdom, phronesis, and reflective understanding, intellectus, can deal with the uncertainty that is embedded in the medical encounter in primary health care. The essay also highlights how digitalization and AI fit well with current market adaptation of health care, where homo economicus and homo digitalis both transform body and health into measurable resources and data. Finally, ethical dilemmas of AI and digitalization are highlighted, as well as the importance of practical and existential knowledge as preconditions for the development and design of a technology that aims to promote the human good.
APA, Harvard, Vancouver, ISO, and other styles
27

Fuchs, Béatrice. "Représentation des connaissances pour le raisonnement à partir de cas : le système ROCADE." Saint-Etienne, 1997. http://www.theses.fr/1997STET4017.

Full text
Abstract:
PADIM aims at supporting the design of supervision systems and assisting the operator in the task of supervising a complex industrial system. The supervision system is a software environment that collects data and assembles it to depict the current situation on interface screens. The operator must choose the dashboards best suited to managing the current situation and adapt them as the situation evolves. Decision support relies on case-based reasoning (CBR) to capture and reuse the operators' experience. Implementing a CBR system in the domain of industrial supervision rests on the acquisition and representation of knowledge of several kinds, with the aim of providing a precise framework for the development of artificial intelligence systems based on CBR. The contribution of this research is twofold. At the knowledge level, a model of CBR was built to describe precisely the functionalities of CBR systems in terms of tasks. It highlights the knowledge required by the reasoning tasks, the knowledge produced, the knowledge models used, and the inference mechanisms involved. This model serves as a guide to ease knowledge acquisition and modelling, and is designed to be applicable in many application domains and for a variety of cognitive tasks. The model captures both the invariant and the specific functionalities of CBR systems; several examples of CBR systems are studied using the task model in order to illustrate its capabilities. At the symbol level, an environment for the development of CBR systems was realized. ROCADE is an object-based knowledge representation system written in Objective-C in the NeXTSTEP environment.
ROCADE provides reasoning capabilities such as inheritance, matching and classification. It eases the elaboration of knowledge by collecting information from the surrounding information system, in particular the supervision system in the PADIM application. The goal of this work remains the integration of the two aspects (support for knowledge acquisition and modelling, and implementation in a development environment for CBR systems) in order to provide a foundation for the development of artificial intelligence systems using case-based reasoning. A complete design-support application in industrial supervision with the Designer system has already been realized.
APA, Harvard, Vancouver, ISO, and other styles
28

Fernandez, Sanchez Javier. "Knowledge Discovery and Data Mining Using Demographic and Clinical Data to Diagnose Heart Disease." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233978.

Full text
Abstract:
Cardiovascular disease (CVD) is the leading cause of morbidity, mortality, premature death and reduced quality of life for the citizens of the EU. It has been reported that CVD represents a major economic load on health care systems in terms of hospitalizations, rehabilitation services, physician visits and medication. Data Mining techniques with clinical data have become an interesting tool to prevent, diagnose or treat CVD. In this thesis, Knowledge Discovery and Data Mining (KDD) was employed to analyse clinical and demographic data, which could be used to diagnose coronary artery disease (CAD). The exploratory data analysis (EDA) showed that female patients at an elderly age with a higher level of cholesterol, maximum achieved heart rate and ST-depression are more prone to be diagnosed with heart disease. Furthermore, patients with atypical angina are more likely to be at an elderly age with a slightly higher level of cholesterol and maximum achieved heart rate than asymptomatic chest pain patients. Moreover, patients with exercise-induced angina showed lower values of maximum achieved heart rate than those who do not experience it. We could verify that patients who experience exercise-induced angina and asymptomatic chest pain are more likely to be diagnosed with heart disease. On the other hand, Logistic Regression, K-Nearest Neighbors, Support Vector Machines, Decision Tree, Bagging and Boosting methods were evaluated by adopting a stratified 10-fold cross-validation approach. The learning models provided an average of 78-83% F-score and a mean AUC of 85-88%. Among all the models, the highest score is given by Radial Basis Function Kernel Support Vector Machines (RBF-SVM), achieving 82.5% ± 4.7% F-score and an AUC of 87.6% ± 5.8%. Our research confirmed that data mining techniques can support physicians in their interpretation of heart disease diagnosis in addition to clinical and demographic characteristics of patients.
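As a rough illustration of the evaluation protocol this abstract describes (stratified 10-fold cross-validation of an RBF-kernel SVM, scored by F-measure), here is a sketch on synthetic data; it stands in for, and does not reproduce, the thesis's clinical dataset or its reported scores:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the clinical data (the thesis used real
# demographic/clinical features such as cholesterol and heart rate).
X, y = make_classification(n_samples=300, n_features=13, random_state=0)

# RBF-kernel SVM evaluated with stratified 10-fold cross-validation,
# mirroring the protocol described in the abstract.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
print(f"F-score: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Stratification keeps the class ratio of diagnosed versus non-diagnosed patients roughly constant across folds, which matters for a clinical dataset with class imbalance.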
APA, Harvard, Vancouver, ISO, and other styles
29

Christophe, François. "Semantics and Knowledge Engineering for Requirements and Synthesis in Conceptual Design: Towards the Automation of Requirements Clarification and the Synthesis of Conceptual Design Solutions." Phd thesis, Ecole centrale de nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00977676.

Full text
Abstract:
This thesis suggests the use of tools from the disciplines of Computational Linguistics and Knowledge Representation, with the idea that such tools would enable the partial automation of two processes of Conceptual Design: the analysis of requirements and the synthesis of concepts of solution. The viewpoint on Conceptual Design developed in this research is based on the systematic methodologies developed in the literature. The evolution of these methodologies provided a precise description of the tasks to be achieved by the designing team in order to achieve successful design. Therefore, the argument of this thesis is that it is possible to create computer models of some of these tasks in order to partially automate the refinement of the design problem and the exploration of the design space. In Requirements Engineering, the definition of requirements consists in identifying the needs of various stakeholders and formalizing them into design specifications. During this task, designers face the problem of having to deal with individuals of different expertise, expressing their needs with different levels of clarity. This research tackles this issue with requirements expressed in natural language (in this case in English). The analysis of needs is realised at different linguistic levels: lexical, syntactic and semantic. The lexical level deals with the meaning of words of a language. Syntactic analysis provides the construction of the sentence in language, i.e. the grammar of a language. The semantic level aims at finding the specific meaning of words in the context of a sentence. This research makes extensive use of a semantic atlas based on the concept of clique from graph theory. Such a concept enables the computation of distances between a word and its synonyms. Additionally, a methodology and a metric of similarity were defined for clarifying requirements at the syntactic, lexical and semantic levels. This methodology integrates tools from research collaborators.
In the synthesis process, a Knowledge Representation of the concepts necessary for enabling computers to create concepts of solution was developed. Such concepts are: function, input/output flow, generic organs, behaviour, components. The semantic atlas is also used at that stage to enable a mapping between functions and their solutions. It works as the interface between the concepts of this Knowledge Representation.
APA, Harvard, Vancouver, ISO, and other styles
30

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/7754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning: a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/8163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Schlobach, Klaus Stefan. "Knowledge discovery in hybrid knowledge representation systems." Thesis, King's College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Char, Kalyani Govinda. "Constructivist artificial intelligence with genetic programming." Thesis, University of Glasgow, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Vaquero, Tiago Stegun. "itSIMPLE: ambiente integrado de modelagem e análise de domínios de planejamento automático." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-19072007-174135/.

Full text
Abstract:
O grande avanço das técnicas de Planejamento em Inteligência Artificial fez com que a Engenharia de Requisitos e a Engenharia do Conhecimento ganhassem extrema importância entre as disciplinas relacionadas a projeto de engenharia (Engineering Design). A especificação, modelagem e análise dos domínios de planejamento automático se tornam etapas fundamentais para melhor entender e classificar os domínios de planejamento, servindo também de guia na busca de soluções. Neste trabalho, é apresentada uma proposta de um ambiente integrado de modelagem e análise de domínios de planejamento, que leva em consideração o ciclo de vida de projeto, representado por uma ferramenta gráfica de modelagem que utiliza diferentes representações: a UML para modelar e analisar as características estáticas dos domínios; XML para armazenar, integrar, e exportar informação para outras linguagens (ex.: PDDL); as Redes de Petri para fazer a análise dinâmica; e a PDDL para testes com planejadores.
The great development in Artificial Intelligence Planning has emphasized the role of Requirements Engineering and Knowledge Engineering among the disciplines that contribute to Engineering Design. The modeling and specification of automated planning domains turn out to be fundamental tasks in order to understand and classify planning domains and guide the application of problem-solving techniques. This work presents the proposed integrated environment for modeling and analyzing automated planning domains, which considers the life cycle of a project and is represented by a tool that uses several language representations: UML to model and perform static analyses of planning environments; XML to hold, integrate, share and export information to other language representations (e.g. PDDL); Petri Nets, with which dynamic analyses are made; and PDDL for testing models with planners.
APA, Harvard, Vancouver, ISO, and other styles
35

Lindgren, Helena. "Decision support in dementia care : developing systems for interactive reasoning." Doctoral thesis, Umeå : Datavetenskap Computing Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dhyani, Dushyanta Dhyani. "Boosting Supervised Neural Relation Extraction with Distant Supervision." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524095334803486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tanner, Michael Clay. "Explaining knowledge systems : justifying diagnostic conclusions /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487599963591483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Jabir, Shaikha. "Terminology-based knowledge acquisition." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843300/.

Full text
Abstract:
A methodology for knowledge acquisition from terminology databases is presented. The methodology outlines how the content of a terminology database can be mapped onto a knowledge base with a minimum of human intervention. Typically, terms are defined and elaborated by terminologists by using sentences that have a common syntactic and semantic structure. It has been argued that in defining terms, terminologists use a local grammar and that this local grammar can be used to parse the definitions. The methodology has been implemented in a program called DEARSys (Definition Analysis and Representation System), that reads definition sentences and extracts new concepts and conceptual relations about the defined terms. The linguistic component of the system is a parser for the sublanguage of terminology definitions that analyses a definition into its logical form, which in turn is mapped onto a frame-based representation. The logical form is based on first-order logic (FOL) extended with untyped lambda calculus. Our approach is data-driven and domain independent; it has been applied to definitions of various domains. Experiments were conducted with human subjects to evaluate the information acquired by the system. The results of the preliminary evaluation were encouraging.
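The idea of a local grammar for definition sentences can be illustrated with a single toy pattern for genus-differentia definitions ('An X is a Y that Z'); the pattern, the frame layout and the example are illustrative assumptions, not DEARSys's actual sublanguage grammar or logical form:

```python
import re

# Toy local grammar for definitions of the form
# "A/An/The <term> is a/an <genus> that/which <differentia>."
PATTERN = re.compile(
    r"^(?:An?|The)\s+(?P<term>[\w\s-]+?)\s+is\s+an?\s+"
    r"(?P<genus>[\w\s-]+?)\s+(?:that|which)\s+(?P<diff>.+?)\.?$",
    re.IGNORECASE,
)

def parse_definition(sentence):
    """Map a definition sentence onto a frame-like dict
    (term, genus, differentia), or None if it does not parse."""
    m = PATTERN.match(sentence.strip())
    return m.groupdict() if m else None

frame = parse_definition("A modem is a device that modulates signals.")
print(frame)
```

A real system would map the parse onto a first-order logical form rather than a flat dict, but the sketch shows why a sublanguage with a shared syntactic skeleton makes largely automatic knowledge acquisition feasible.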
39

Ribeiro, Marcelo Stopanovski. "KMAI - Knowledge Management With Artificial Intelligence gestão do conhecimento com inteligência artificial." Florianópolis, SC, 2003. http://repositorio.ufsc.br/xmlui/handle/123456789/84563.

Full text
Abstract:
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção.
This monograph addresses, first, the operational fusion of Knowledge Management and Intelligence under conditional parameters of Artificial Intelligence (KMAI). It then covers the practical and theoretical incorporation of a revolutionary information-analysis model that begins with a knowledge-representation methodology supported by dedicated tools (Representação do Conhecimento Contextualizado Dinamicamente, RC2D) and ends with intelligent information-retrieval algorithms (Pesquisa Contextual Estruturada, PCE), drawing along the way on a myriad of established yet cutting-edge supporting and value-adding technologies. KMAI, Knowledge Management with Artificial Intelligence, is above all a concept. It aims to be a strategic differentiator for knowledge organizations that seek to gain competitiveness by processing information for decision-making. The objectives of this dissertation are to present the historical foundations of the rise of the Information Society, with emphasis on the needs and events of the Second World War, and to describe the KMAI process and its technological tools as seen in tangible deployments.
40

Gkiokas, Alexandros. "Imitation learning in artificial intelligence." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/94683/.

Full text
Abstract:
Acquiring new knowledge often requires an agent or a system to explore, search and discover. Yet we humans build upon the knowledge of our forefathers, as did they; there exists a mechanism that allows the transfer of knowledge without searching, exploration or discovery. That mechanism is known as imitation, and it exists everywhere in nature: in animals, insects, primates and humans. Enabling artificial, cognitive and software agents to learn by imitation could prove crucial to the emergence of autonomous systems, robotics, cyber-physical systems and software agents. Imitation in AI implies that agents can learn from their human users or from other AI agents, through observation or, in robotics, through physical interaction, and can therefore learn much faster and more easily. Describing an imitation-learning framework in AI that uses the Internet as the source of knowledge requires a rather unconventional approach: the procedure is a temporal-sequential process that combines reinforcement based on behaviourist psychology, deep learning and a plethora of other algorithms. An agent using a hybrid simulating-emulating strategy is therefore formulated, implemented and experimented with. That agent learns from RSS feeds using examples provided by the user; it builds on previous research and theoretical foundations and demonstrates that imitation learning in AI is not only possible but compares with, and in some cases outperforms, traditional approaches.
41

Shankar, Arunprasath. "ONTOLOGY-DRIVEN SEMI-SUPERVISED MODEL FOR CONCEPTUAL ANALYSIS OF DESIGN SPECIFICATIONS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1401706747.

Full text
42

Bowker, Lynne. "Guidelines for handling multidimensionality in a terminological knowledge base." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7607.

Full text
Abstract:
The goal of this thesis is to develop and apply a set of guidelines for handling multidimensionality in a terminological knowledge base (TKB). A dimension represents one way of classifying a group of objects; a classification with more than one dimension is said to be multidimensional. The recognition and representation of multidimensionality is a subject that has received very little attention in the terminology literature. One tool for dealing with multidimensionality is CODE (Conceptually Oriented Description Environment). This thesis is divided into four main parts. In Part I, we discuss the general principles of classification, and explain multidimensionality. In Part II, we develop an initial set of guidelines to help terminologists both recognize and represent multidimensionality in a TKB. In Part III, we develop a technical complement to the initial guidelines. We begin with a general description of the CODE system, and then we analyze those features that are particularly helpful for handling multidimensionality. Finally, in Part IV, we apply our guidelines by using the CODE system to construct a small TKB for concepts in a subfield of hypertext, namely hypertext links. (Abstract shortened by UMI.)
43

Zhang, Shanshan. "Deep Learning for Unstructured Data by Leveraging Domain Knowledge." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/580099.

Full text
Abstract:
Computer and Information Science
Ph.D.
Unstructured data such as texts, strings, images, audio and video are everywhere, owing to social interaction on the Internet and to high-throughput technology in the sciences, e.g., chemistry and biology. For traditional machine learning algorithms, however, classifying a text document is far more difficult than classifying a data entry in a spreadsheet. We have to convert the unstructured data into numeric vectors that machine learning algorithms can understand. For example, a sentence is first converted to a vector of word counts and then fed into a classification algorithm such as logistic regression or a support vector machine. The creation of such numerical vectors is very challenging. Recent progress in deep learning provides a new way to jointly learn features and train classifiers for unstructured data. For example, recurrent neural networks have proved successful at learning from sequences of word indices, and convolutional neural networks are effective at learning from videos, which are sequences of pixel matrices. Our research focuses on developing novel deep learning approaches for text and graph data. Breakthroughs using deep learning have been made over the last few years for many core tasks in natural language processing, such as machine translation, POS tagging and named entity recognition. However, when it comes to informal and noisy text data, such as tweets, HTML and OCR output, there are two major issues with modern deep learning technologies. First, deep learning requires a large amount of labeled data to train an effective model; second, neural network architectures that work with natural language are not well suited to informal text. In this thesis, we address these two issues and develop new deep learning approaches for four supervised and unsupervised tasks on noisy text. We first present a deep feature engineering approach for discovering informative tweets during emerging disasters.
We propose to use unlabeled microblogs to cluster words into a limited number of clusters and to use the word clusters as features for tweet discovery. Our results indicate that when the number of labeled tweets is 100 or fewer, the proposed approach is superior to standard classification based on the bag-of-words feature representation. We then introduce a human-in-the-loop (HIL) framework for entity identification from noisy web text. Our work explores ways to combine the expressive power of regular expressions (REs) with the ability of deep learning to learn from large data in a new integrated framework for entity identification from web data. Evaluation on several entity identification problems shows that the proposed framework achieves very high accuracy while requiring only modest human involvement. We further extend the entity identification framework to an iterative HIL framework that addresses the entity recognition problem; in particular, we investigate how humans invest their time when a user is allowed to choose between regex construction and manual labeling. Finally, we address a fundamental problem in the text mining domain, i.e., the embedding of rare and out-of-vocabulary (OOV) words, by refining word embedding models and character embedding models in an iterative way. We illustrate the simplicity and effectiveness of our method by applying it to online professional profiles that allow noisy user input. Graph neural networks have shown great success in drug design and the material sciences, where organic molecules and the crystal structures of materials are represented as attributed graphs. A deep learning architecture capable of learning from graph nodes and graph edges is crucial for property estimation of molecules. In this dissertation, we propose a simple graph representation for molecules and three neural network architectures that are able to learn predictive functions directly from graphs.
We find that graph networks are indeed superior to feature-driven algorithms for formation energy prediction, but this superiority does not carry over to band gap prediction. We also find that our proposed simple shallow neural networks perform comparably to state-of-the-art deep neural networks.
Temple University--Theses
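The bag-of-words step this abstract mentions, turning a sentence into a numeric count vector before handing it to a classifier such as logistic regression, can be sketched in plain Python (a toy illustration, not the author's code):

```python
from collections import Counter

def bag_of_words(sentences):
    """Map raw sentences to count vectors over a shared, sorted vocabulary."""
    tokenized = [s.lower().split() for s in sentences]
    vocab = sorted({w for toks in tokenized for w in toks})
    vectors = []
    for toks in tokenized:
        counts = Counter(toks)
        # one integer per vocabulary word: how often it occurs in this sentence
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

vocab, X = bag_of_words(["the cat sat", "the dog sat on the mat"])
print(vocab)  # ['cat', 'dog', 'mat', 'on', 'sat', 'the']
print(X)      # [[1, 0, 0, 0, 1, 1], [0, 1, 1, 1, 1, 2]]
```

The rows of `X` are exactly the kind of numeric vectors a linear classifier consumes; the thesis's point is that such hand-built featurization is brittle for noisy text, which motivates learning the features jointly with the classifier.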
44

Corsar, David. "Developing knowledge-based systems through ontology mapping and ontology guided knowledge acquisition." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=25800.

Full text
45

Boyle, Jean-Marc. "Knowledge-based techniques for multivariable control system design." Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329990.

Full text
46

Chui, David Kam Hung. "Artificial intelligence techniques for power system decision problems." Thesis, Queen Mary, University of London, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387837.

Full text
47

Kiani, Bobak Toussi. "Quantum artificial intelligence : learning unitary transformations." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127158.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 77-83).
Linear algebra is a simple yet elegant mathematical framework that serves as the mathematical bedrock for many scientific and engineering disciplines. Broadly defined as the study of linear equations represented as vectors and matrices, linear algebra provides a mathematical toolbox for manipulating and controlling many physical systems. For example, linear algebra is central to the modeling of quantum mechanical phenomena and machine learning algorithms. Within the broad landscape of matrices studied in linear algebra, unitary matrices stand apart for their special properties, namely that they preserve norms and have easy to calculate inverses. Interpreted from an algorithmic or control setting, unitary matrices are used to describe and manipulate many physical systems.
Relevant to the current work, unitary matrices are commonly studied in quantum mechanics where they formulate the time evolution of quantum states and in artificial intelligence where they provide a means to construct stable learning algorithms by preserving norms. One natural question that arises when studying unitary matrices is how difficult it is to learn them. Such a question may arise, for example, when one would like to learn the dynamics of a quantum system or apply unitary transformations to data embedded into a machine learning algorithm. In this thesis, I examine the hardness of learning unitary matrices both in the context of deep learning and quantum computation. This work aims to both advance our general mathematical understanding of unitary matrices and provide a framework for integrating unitary matrices into classical or quantum algorithms. Different forms of parameterizing unitary matrices, both in the quantum and classical regimes, are compared in this work.
In general, experiments show that learning an arbitrary d × d unitary matrix requires at least d² parameters in the learning algorithm regardless of the parameterization considered. In classical (non-quantum) settings, unitary matrices can be constructed by composing products of operators that act on smaller subspaces of the unitary manifold. In the quantum setting, there also exists the possibility of parameterizing unitary matrices in the Hamiltonian setting, where it is shown that repeatedly applying two alternating Hamiltonians is sufficient to learn an arbitrary unitary matrix. Finally, I discuss applications of this work in quantum and deep learning settings. For near-term quantum computers, applying a desired set of gates may not be efficiently possible. Instead, desired unitary matrices can be learned from a given set of available gates (similar to ideas discussed in quantum controls).
Understanding the learnability of unitary matrices can also aid efforts to integrate unitary matrices into neural networks and quantum deep learning algorithms. For example, deep learning algorithms implemented in quantum computers may leverage parameterizations discussed here to form layers in a quantum learning architecture.
by Bobak Toussi Kiani.
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
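The d² parameter count discussed in this abstract has a concrete reading: a d × d Hermitian matrix carries exactly d² real degrees of freedom, and exponentiating it yields a unitary. A minimal NumPy sketch (my own illustration, not the thesis code) builds a unitary from d² parameters and checks the norm-preserving property:

```python
import numpy as np

def unitary_from_params(theta, d):
    """Build a d x d unitary U = exp(iH) from exactly d**2 real parameters.

    A d x d Hermitian H has d**2 real degrees of freedom: d real diagonal
    entries plus d*(d-1)/2 complex (two-parameter) off-diagonal entries.
    """
    assert theta.size == d * d
    n_off = d * (d - 1) // 2
    H = np.diag(theta[:d].astype(complex))
    iu = np.triu_indices(d, k=1)
    H[iu] = theta[d:d + n_off] + 1j * theta[d + n_off:]
    H = H + np.tril(H.conj().T, k=-1)  # mirror the conjugates below the diagonal
    # exp(iH) via the spectral decomposition of the Hermitian H
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

rng = np.random.default_rng(0)
d = 4
U = unitary_from_params(rng.normal(size=d * d), d)
x = rng.normal(size=d) + 1j * rng.normal(size=d)
print(np.allclose(U.conj().T @ U, np.eye(d)))                # unitarity
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # norms preserved
```

This is only the parameterization side of the story; the thesis's question is how hard it is to fit such parameters to a target unitary, classically or on quantum hardware.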
48

Hellsten, Mark. "Artificial intelligence and knowledge intensive labour: Evidence from job postings." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-92279.

Full text
49

Quantrille, Thomas E. "Prolog and artificial intelligence in chemical engineering." Diss., This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-06062008-170029/.

Full text
50

Ong, Yew Soon. "Artificial intelligence technologies in complex engineering design." Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273909.

Full text
