Dissertations / Theses on the topic 'Edge computing with artificial intelligence'

To see the other types of publications on this topic, follow the link: Edge computing with artificial intelligence.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Edge computing with artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Antonini, Mattia. "From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.

Full text
Abstract:
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications exploiting this approach. However, after a hype that lasted for a few years, cloud-centric approaches have started showing some of their main limitations when dealing with the connectivity of many devices with remote endpoints, such as high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing allows shifting the execution of applications to the edge of the network (a partition of the network physically close to data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift the processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications by considering the system requirements and the constraints imposed by embedded devices.
To make this process smoother, the transition is split into different steps starting with the off-loading of the processing (including the Artificial Intelligence algorithms) at the edge of the network, then the distribution of computation across multiple edge devices and even closer to data-sources based on system constraints, and, finally, the optimization of the processing pipeline and AI models to efficiently run on target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
APA, Harvard, Vancouver, ISO, and other styles
2

WoldeMichael, Helina Getachew. "Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer : Technologies, Capabilities, and Performance." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17267.

Full text
Abstract:
IoT has become tremendously popular. It provides information access, processing, and connectivity for a huge number of devices or sensors. IoT systems, however, often do not process the information locally, but rather send it to remote locations in the Cloud. As a result, this adds a huge amount of data traffic to the network and additional delay to data processing. The latter can have a significant impact on applications that require fast response times, such as sophisticated artificial intelligence (AI) applications including augmented reality, face recognition, and object detection. Consequently, the edge computing paradigm, which enables computation of data near the source, has gained significant importance for achieving fast response times in recent years. IoT devices can be employed to provide computational resources at the edge of the network, near the sensors and actuators. The aim of this thesis work is to design and implement an edge computing concept that brings AI models to a small embedded IoT device through the use of virtualization. Virtualization technology enables the easy packing and shipping of applications to different hardware platforms. Additionally, it enables the mobility of AI models between edge devices and the Cloud. We implement an AI model inside a Docker container, which is deployed on a Firefly-RK3399 single-board computer (SBC). Furthermore, we conduct CPU and memory performance evaluations of Docker on the Firefly-RK3399. The methodology adopted to reach our goal is experimental research. First, the relevant literature was studied; the feasibility of our concept was then demonstrated by implementation. We then set up an experiment covering the measurement of performance metrics under synthetic load in multiple scenarios. Results are validated by repeating the experiment and by statistical analysis.
The results of this study show that an AI model can successfully be deployed and executed inside a Docker container on an ARM Cortex-based single-board computer. A Docker image of the OpenFace face recognition model is built for the ARM architecture of the Firefly SBC. The performance evaluation reveals that the performance overhead of Docker in terms of CPU and memory is negligible. The research work describes how an AI application can be containerized for the ARM architecture. We conclude that the methods can be applied to containerize software applications on ARM-based IoT devices. Furthermore, the insignificant overhead introduced by Docker facilitates the deployment of applications inside containers. The functionality of the IoT device, the Firefly-RK3399, is exploited in this thesis; the device is shown to be capable and powerful, giving insight for further studies.
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Hsinchun, Jay F. Nunamaker, Richard E. Orwig, and Olga Titkova. "Information Visualization for Collaborative Computing." IEEE, 1998. http://hdl.handle.net/10150/105495.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
A prototype tool classifies output from an electronic meeting system into a manageable list of concepts, topics, or issues that a group can further evaluate. In an experiment with output from GroupSystems electronic meeting system, the tool's recall ability was comparable to that of a human facilitator, but took roughly a sixth of the time.
APA, Harvard, Vancouver, ISO, and other styles
4

Hutson, Matt 1978. "Artificial intelligence and musical creativity : computing Beethoven's tenth." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/85756.

Full text
Abstract:
Thesis (S.M. in Science Writing)--Massachusetts Institute of Technology, Dept. of Humanities, Program in Writing and Humanistic Studies, 2003.
Includes bibliographical references (p. 47-48).
by Matthew T. Hutson.
S.M.in Science Writing
APA, Harvard, Vancouver, ISO, and other styles
5

Shan, Qingshan. "Artificial intelligence for identifying impacts on smart composites." Thesis, Southampton Solent University, 2004. http://ssudl.solent.ac.uk/600/.

Full text
Abstract:
Identification of low-velocity impacts to composite structures has become increasingly important in the aerospace industry. Knowing when impacts have occurred would allow inspections to be scheduled only when necessary, and knowing the approximate impact location would allow for a localized search, saving time and expense. Additionally, an estimation of the impact magnitude could be used for damage prediction. This study experimentally investigated a methodology for impact identification. To achieve this, the following issues were covered: impact detection, signal processing, feature extraction, and impact identification. In impact detection, smart structures with two piezoelectric sensors embedded in the composite are designed to measure impact signals caused by foreign-object impact events. The impact signals were stored in computer system memory through the impact monitoring system developed in this study. In signal processing, the cross-correlation method was used to process the measured impact signals. This processing established the correlation between the impact signals and the location of impacts as well as the impact magnitude. In feature extraction, the initial feature data were obtained from the cross-correlation results through point and segmentation processing. The final feature data were selected from the initial feature data with a fuzzy clustering method. In impact identification, adaptive neuro-fuzzy inference systems (ANFIS) were built with the feature data to identify the abscissas of impact locations, the ordinates of impact locations, and the impact magnitude. The parameters of the ANFISs were refined with a hybrid learning rule, i.e., a combination of least-squares estimation and the steepest-descent algorithm. Real-time software developed in Visual Basic manipulated the monitoring and control system for the impact experiments. A software package developed in MATLAB implemented the impact identification and the system simulation.
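The cross-correlation step described in this abstract can be illustrated with a toy two-sensor example (not from the thesis; synthetic signals and a generic NumPy sketch): the lag that maximizes the cross-correlation of the two piezoelectric signals gives their arrival-time difference, a quantity that can then be related to impact location.

```python
import numpy as np

def arrival_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time offset between two sensor signals (seconds).

    Negative result: the feature in sig_a arrives earlier than in sig_b.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # sample offset of a relative to b
    return lag / fs

# Synthetic example: the same impact transient reaches sensor B 5 samples later.
fs = 10_000  # sampling rate, Hz (hypothetical)
pulse = np.exp(-np.linspace(0, 5, 50)) * np.sin(np.linspace(0, 20, 50))
sig_a = np.zeros(200); sig_a[40:90] = pulse
sig_b = np.zeros(200); sig_b[45:95] = pulse
print(arrival_delay(sig_a, sig_b, fs))  # -0.0005: A leads B by 5 samples
```

In the thesis this correlation output then feeds feature extraction and the ANFIS models, rather than being used for direct triangulation.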
APA, Harvard, Vancouver, ISO, and other styles
6

Tsui, Kwok Ching. "Neural network design using evolutionary computing." Thesis, King's College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Samikwa, Eric. "Flood Prediction System Using IoT and Artificial Neural Networks with Edge Computing." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280299.

Full text
Abstract:
Flood disasters affect millions of people across the world by causing severe loss of life and colossal damage to property. The Internet of Things (IoT) has been applied in areas such as flood prediction, flood monitoring, and flood detection. Although IoT technologies cannot stop the occurrence of flood disasters, they are exceptionally valuable tools for delivering disaster-preparedness and prevention information. Advances have been made in flood prediction using artificial neural networks (ANN). Despite the various advancements in flood prediction systems through the use of ANN, there has been less focus on the utilisation of edge computing for improved efficiency and reliability of such systems. In this thesis, a system for short-term flood prediction that uses IoT and ANN, where the prediction computation is carried out on a low-power edge device, is proposed. The system monitors real-time rainfall and water-level sensor data and predicts flood water levels ahead of time using long short-term memory (LSTM) networks. The system can be deployed on battery power as it uses low-power IoT devices and communication technology. The results of evaluating a prototype of the system indicate good performance in terms of flood prediction accuracy and response time. The application of ANN with edge computing will help improve the efficiency of real-time flood early-warning systems by bringing the prediction computation close to where data is collected.
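As a rough illustration of the prediction core (a generic LSTM cell forward pass in NumPy with random weights — not the thesis's trained model, features, or horizon), each time step consumes a rainfall and water-level reading and updates a hidden state from which a water level would then be predicted:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of an LSTM cell (input, forget, output, candidate gates)."""
    n = h.size
    z = W @ x + U @ h + b                 # all four gate pre-activations, stacked
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:])                  # candidate cell state
    c = f * c + i * g                     # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 2, 8                        # inputs: rainfall and current water level
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for rain, level in [(0.1, 1.2), (0.4, 1.3), (0.9, 1.7)]:  # hypothetical hourly readings
    h, c = lstm_step(np.array([rain, level]), h, c, W, U, b)
print(h.shape)  # (8,)
```

A final linear layer (not shown) would map the hidden state to the predicted future water level; on a low-power edge device, only this cheap forward pass needs to run.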
APA, Harvard, Vancouver, ISO, and other styles
8

Na, Jongwhoa. "Design and simulation of digital optical computing systems for artificial intelligence." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186989.

Full text
Abstract:
Rule-based systems (RBSs) are one of the problem-solving methodologies in artificial intelligence. Although RBSs have vast potential in many application areas, the slow execution speed of current RBSs has prevented its full exploitation. In this dissertation, to improve the speed of RBSs, we explore the use of optics for fast, parallel RBS architectures. First, we propose an electro-optical rule-based system (EORBS). Using two-dimensional knowledge representation and a monotonic reasoning scheme, EORBS provides a highly efficient implementation of the basic operations needed in rule-based systems, namely matching, selection, and rule firing. The execution speed of the proposed system is theoretically estimated and is shown to be two orders of magnitude faster than current electronic systems. Although EORBS shows the best performance in execution speed compared to other RBSs, the monotonic reasoning scheme restricts its application domains. In order to overcome this limitation, a general-purpose RBS, called an Optical Content-Addressable Parallel Processor for Expert Systems (OCAPP-ES), is proposed. Using a general knowledge representation scheme and a parallel conflict resolution scheme, OCAPP-ES executes the three basic RBS operations on general knowledge (including variables, symbols, and numbers) in a highly parallel fashion. The performance of OCAPP-ES is theoretically estimated and is shown to be an order of magnitude slower than that of EORBS, but still an order of magnitude faster than any other RBS. Furthermore, OCAPP-ES is designed to support the general knowledge representation scheme so that it can serve as a high-speed general-purpose RBS. To verify the proposed architectures, we developed a modeling and simulation methodology for digital optical computing systems.
The methodology predicts maximum performance of a given optical computing architecture and evaluates its feasibility. As an application example, we apply this methodology to evaluate the feasibility and performance of OCAPP which is the optical match unit of OCAPP-ES. The proposed methodology is intended to reduce optical computing systems' design time as well as the design risk associated with building a prototype system.
APA, Harvard, Vancouver, ISO, and other styles
9

Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.

Full text
Abstract:
Distributed human computing techniques have been shown to be effective ways of accessing the problem-solving capabilities of a large group of anonymous individuals over the World Wide Web. They have been successfully applied to such diverse domains as computer security, biology and astronomy. The success of distributed human computing in various domains suggests that it can be utilized for complex collaborative problem solving. Thus it could be used for "machine science": utilizing machines to facilitate the vetting of disparate human hypotheses for solving scientific and engineering problems. In this thesis, we show that machine science is possible through distributed human computing methods for some tasks. By enabling anonymous individuals to collaborate in a way that parallels the scientific method -- suggesting hypotheses, testing and then communicating them for vetting by other participants -- we demonstrate that a crowd can together define robot control strategies, design robot morphologies capable of fast-forward locomotion and contribute features to machine learning models for residential electric energy usage. We also introduce a new methodology for empowering a fully automated robot design system by seeding it with intuitions distilled from the crowd. Our findings suggest that increasingly large, diverse and complex collaborations that combine people and machines in the right way may enable problem solving in a wide range of fields.
APA, Harvard, Vancouver, ISO, and other styles
10

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Full text
Abstract:
Deepfakes are an emerging problem in social media, and for celebrities and political figures it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge. This is done in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of data, since these factors have been identified as important to the performance of the models. The performance of the models is evaluated by measuring the accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method. This is done in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly with each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.
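The bagging step and the storage concern can be sketched as follows (hypothetical stand-in detectors, not the thesis's YOLOv2 models; only the prediction averaging and the linear storage growth mirror the report):

```python
import numpy as np

def bagged_predict(models, x):
    """Average the 'fake' probabilities of independently trained detectors."""
    return float(np.mean([m(x) for m in models]))

# Stand-ins for detectors trained on different bootstrap samples: each is a
# sigmoid with its own decision threshold t (purely illustrative).
models = [lambda x, t=t: 1 / (1 + np.exp(-(x - t))) for t in (0.2, 0.5, 0.8)]

score = bagged_predict(models, x=1.0)  # > 0.5 -> classify as deepfake
print(round(score, 3))  # 0.621

# The report's storage concern: the ensemble grows linearly with its members.
size_mb = 200 * len(models)  # ~200 MB per trained model, per the abstract
print(size_mb)  # 600
```

With hundreds of cooperating edge devices each contributing a ~200 MB member, the ensemble reaches the multi-gigabyte sizes the authors found impractical.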
APA, Harvard, Vancouver, ISO, and other styles
11

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Full text
Abstract:
The analysis and automatic estimation of affect from human expression has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions in very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings, and illumination change. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals, and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar wavelet transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Besides hand-crafted features, deep learning features are also analysed layer-wise, across convolutional and fully connected layers. Convolutional neural networks such as AlexNet, VGGFace, and ResNet have been selected as deep learning architectures for feature extraction on facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per time step. Then, in the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory, or Kalman Filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion, and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically and continuously estimate affect values from naturalistic human expression. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework, and fusion between modalities using mathematical equations, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
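The two-stage regression idea can be sketched on synthetic data (a minimal stand-in, not the thesis's pipeline: plain linear regression replaces SVR in stage one, and exponential smoothing replaces the Kalman filter or LSTM in stage two):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)
valence = np.sin(2 * np.pi * t / 100)                    # slowly varying ground truth
X = np.c_[valence + rng.normal(0, 0.3, T), np.ones(T)]   # noisy per-frame features

# Stage 1: frame-wise regression (stand-in for SVR) gives jittery estimates.
w, *_ = np.linalg.lstsq(X, valence, rcond=None)
frame_est = X @ w

# Stage 2: a temporal model smooths consecutive estimates (exponential
# smoothing here as a simple stand-in for a Kalman filter).
alpha = 0.3
smooth = np.empty(T)
smooth[0] = frame_est[0]
for k in range(1, T):
    smooth[k] = (1 - alpha) * smooth[k - 1] + alpha * frame_est[k]

err_raw = np.mean((frame_est - valence) ** 2)
err_smooth = np.mean((smooth - valence) ** 2)
print(err_smooth < err_raw)  # the temporal stage reduces the estimation error
```

The benefit mirrors the abstract's argument: emotional dimensions drift slowly, so a temporal second stage filters out frame-to-frame variability without destroying the signal.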
APA, Harvard, Vancouver, ISO, and other styles
12

Zhao, Haixiang. "Artificial Intelligence Models for Large Scale Buildings Energy Consumption Analysis." Phd thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658767.

Full text
Abstract:
The energy performance of buildings is influenced by many factors, such as ambient weather conditions, building structure and characteristics, occupancy and occupant behaviour, and the operation of sub-level components like the Heating, Ventilation and Air-Conditioning (HVAC) system. This complexity makes the prediction, analysis, and fault detection/diagnosis of building energy consumption very difficult to perform accurately and quickly. This thesis mainly focuses on up-to-date artificial intelligence models and their application to these problems. First, we review recently developed models for solving them, including detailed and simplified engineering methods, statistical methods, and artificial intelligence methods. Then we simulate energy consumption profiles for single and multiple buildings, and on these datasets, support vector machine models are trained and tested to do the prediction. The results from extensive experiments demonstrate the high prediction accuracy and robustness of these models. Second, a Recursive Deterministic Perceptron (RDP) neural network model is used to detect and diagnose faulty building energy consumption. The abnormal consumption is simulated by manually introducing performance degradation to electric devices. In the experiment, the RDP model shows very high detection ability. A new approach is proposed to diagnose faults, based on the evaluation of multiple RDP models, each of which is able to detect an equipment fault. Third, we investigate how the selection of subsets of features influences model performance. The optimal features are selected based on the feasibility of obtaining them and on the scores they receive under the evaluation of two filter methods.
Experimental results confirm the validity of the selected subset and show that the proposed feature selection method can guarantee model accuracy while reducing computational time. One challenge of predicting building energy consumption is accelerating model training when the dataset is very large. This thesis proposes an efficient parallel implementation of support vector machines, based on the decomposition method, for solving such problems. The parallelization targets the most time-consuming part of training, i.e., updating the gradient vector f. The inner problems are handled by a sequential minimal optimization solver. The underlying parallelism is conducted by a shared-memory version of the Map-Reduce paradigm, making the system particularly suitable for multi-core and multiprocessor systems. Experimental results show that our implementation offers a large speedup compared to Libsvm, and is superior to the state-of-the-art MPI implementation Pisvm in both speed and storage requirements.
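The parallelized step — updating the gradient vector f after each working-set solve in the decomposition method — can be sketched with a thread pool standing in for the shared-memory Map-Reduce workers (a generic sketch, not the thesis's implementation; variable names are illustrative):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_gradient(f, K, delta_alpha, B, n_workers=4):
    """Apply f_i += sum_{j in B} K[i, j] * delta_alpha_j for all i.

    The rows of f are split into chunks updated by a pool of workers (the
    'map' step); the 'reduce' is trivial because each worker owns a
    disjoint slice of the shared array f.
    """
    def work(rows):
        f[rows] += K[np.ix_(rows, B)] @ delta_alpha
    chunks = np.array_split(np.arange(len(f)), n_workers)
    with ThreadPoolExecutor(n_workers) as ex:
        list(ex.map(work, chunks))
    return f

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))
K = X @ X.T                           # linear kernel matrix (toy problem)
f = np.zeros(n)
B = np.array([3, 7])                  # working set chosen by the outer solver
d = np.array([0.5, -0.2])             # change in alpha on the working set
update_gradient(f, K, d, B)
print(np.allclose(f, K[:, B] @ d))    # True: matches the serial update
```

Only |B| columns of the kernel matrix are touched per iteration, which is why this update dominates training time for large datasets and rewards parallelization.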
APA, Harvard, Vancouver, ISO, and other styles
13

An, Hongyu. "Powering Next-Generation Artificial Intelligence by Designing Three-dimensional High-Performance Neuromorphic Computing System with Memristors." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/101838.

Full text
Abstract:
Human brains can complete numerous intelligent tasks, such as pattern recognition, reasoning, control, and movement, with remarkable energy efficiency (20 W). In contrast, a typical computer only recognizes 1,000 different objects but consumes about 250 W of power [1]. These significant performance differences stem from the intrinsically different structures of human brains and digital computers. The latest discoveries in neuroscience indicate that the capabilities of human brains are attributed to three unique features: (1) neural network structure; (2) spike-based signal representation; (3) synaptic plasticity and associative memory learning [1, 2]. In this dissertation, the next-generation platform of artificial intelligence is explored by utilizing memristors to design a three-dimensional high-performance neuromorphic computing system. The low-variation memristors (fabricated by Virginia Tech), achieved by adding heat dissipation layers, significantly improve the learning accuracy of the system. Moreover, three emerging neuromorphic architectures are proposed, showing a path to realizing the next-generation platform of artificial intelligence with self-learning capability and high energy efficiency. Finally, an Associative Memory Learning System is exhibited that reproduces associative memory learning, remembering and correlating two concurrent events (the pronunciation and shape of digits).
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
14

Beiko, Robert G. "Evolutionary computing strategies for the detection of conserved patterns in genomic DNA." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29009.

Full text
Abstract:
The detection of regulatory sequences in DNA is a challenging problem, especially when considered in the context of whole genomes. The degree of sequence conservation of regulatory protein binding sites is often weak, and the sites are obscured by surrounding intergenic sequence. Since structural interactions are vital for protein-DNA interactions, structural representations of regulatory sites can yield a more accurate model and a better understanding of within-site variability. However, the use of multiple alternative representations of DNA introduces a requirement for novel algorithms that can create and test different combinations of DNA features. The Genetic Algorithm Neural Network (GANN) was designed to identify combinations of patterns that can be used to distinguish between different classes of training sequence. GANN trains a set of artificial neural networks to classify sets of sequences using either backpropagation or a genetic algorithm, and uses an 'outer genetic algorithm' to choose the best inputs from a pool of DNA features that can include sequence, structure, and weight matrix representations. When trained with a subset of upstream sequences from a whole genome, GANN was able to detect patterns such as the Shine-Dalgarno sequence in Escherichia coli K12, and sequences consistent with archaeal promoters in the archaeon Sulfolobus solfataricus P2. The Motif Genetic Algorithm (MGA) constructs motif representations by concatenating minimal units of DNA sequence and structure. This algorithm was used to model conserved patterns in DNA, including the binding sites for E. coli cyclic AMP activated protein (CAP), integration host factor (IHF), and two different promoter types recognized by alternative bacterial sigma factors. The CAP models were used to detect other putative binding sites in upstream regions of the E. coli K12 genome, while attempts to train an accurate model of IHF binding sites revealed an important role for structural representations in motif modeling.
APA, Harvard, Vancouver, ISO, and other styles
15

Lanzarone, Lorenzo Biagio. "Manutenzione predittiva di macchinari industriali tramite tecniche di intelligenza artificiale: una valutazione sperimentale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22853/.

Full text
Abstract:
Society is undergoing a process of technological evolution that is creating a connection between the physical and digital environments for the exchange of data and information. This thesis investigates, in the context of Industry 4.0, the predictive maintenance of industrial machinery through artificial intelligence techniques, with the aim of predicting an imminent failure before it actually occurs. The thesis is divided into two complementary parts: the first covers the theoretical aspects of the context and the state of the art, while the second covers the practical and design aspects. In particular, the first part provides an overview of Industry 4.0 and of one of its applications, predictive maintenance. It then addresses the topics of artificial intelligence and Data Science, through which predictive maintenance can be applied. The second part presents a practical project, namely the work I carried out during an internship at the software house Open Data in Funo di Argelato (Bologna). The goal of the project was to build a predictive-maintenance system for plastic injection moulding machinery using artificial intelligence techniques. The final aim is the integration of this system into the Opera MES software developed by the company. (Translated from the Italian.)
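The abstract does not detail the thesis's models; as a generic, minimal illustration of the idea behind predictive maintenance (flagging telemetry that deviates from a machine's recent behaviour before a fault develops), here is a rolling z-score detector. All names, window sizes, and thresholds are illustrative assumptions:

```python
import statistics

# Rolling z-score detector: a reading is flagged when it deviates from the
# mean of the previous `window` readings by more than `threshold` standard
# deviations. Window and threshold values are illustrative.
def anomaly_flags(readings, window=10, threshold=3.0):
    flags = []
    for i, x in enumerate(readings):
        hist = readings[max(0, i - window):i]
        if len(hist) < window:
            flags.append(False)  # not enough history yet
            continue
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist) or 1e-9  # avoid division by zero
        flags.append(abs(x - mu) / sigma > threshold)
    return flags

# A stable sensor trace with one injected spike (a hypothetical incipient fault).
signal = [20.0 + 0.1 * (i % 5) for i in range(30)]
signal[25] = 35.0
flags = anomaly_flags(signal)
print(flags.index(True))  # prints 25: the spike is the only flagged reading
```

Real systems would learn the expected behaviour with trained models rather than a fixed threshold, but the flag-before-failure pattern is the same.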
APA, Harvard, Vancouver, ISO, and other styles
16

Maripi, Jagadish Kumar. "AN EFFECTIVE PARALLEL PARTICLE SWARM OPTIMIZATION ALGORITHM AND ITS PERFORMANCE EVALUATION." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/theses/275.

Full text
Abstract:
Population-based global optimization algorithms, including Particle Swarm Optimization (PSO), have become popular for solving multi-optima problems far more efficiently than traditional mathematical techniques. In this research, we present and evaluate a new parallel PSO algorithm that provides a significant performance improvement over the serial PSO algorithm. Instead of merely distributing parts of the serial version's task across several processors, the new algorithm places multiple swarms on the available nodes, where they operate independently while collaborating on the same task. By reducing the communication bottleneck and by being able to manipulate the individual swarms independently, the proposed approach outperforms the original PSO algorithm while maintaining its simplicity and ease of implementation.
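The multi-swarm scheme described above can be sketched as follows. The swarms are iterated sequentially here for simplicity (the thesis places each swarm on its own processor node), and every constant is an illustrative assumption rather than the thesis's configuration:

```python
import random

random.seed(1)

# Minimal multi-swarm PSO: several swarms search independently and only
# occasionally share their best solution, mimicking sparse inter-node traffic.
DIM, SWARMS, PARTICLES, ITERS, EXCHANGE_EVERY = 4, 3, 10, 60, 10

def sphere(x):  # benchmark objective: global minimum 0 at the origin
    return sum(v * v for v in x)

def new_particle():
    pos = [random.uniform(-5, 5) for _ in range(DIM)]
    return {"pos": pos, "vel": [0.0] * DIM, "best": pos[:], "best_f": sphere(pos)}

swarms = [[new_particle() for _ in range(PARTICLES)] for _ in range(SWARMS)]
swarm_best = [min((p["best_f"], p["best"]) for p in s) for s in swarms]

for it in range(ITERS):
    for si, swarm in enumerate(swarms):
        gf, gx = swarm_best[si]  # this swarm's own best: no cross-swarm traffic
        for p in swarm:
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                p["vel"][d] = (0.7 * p["vel"][d]
                               + 1.5 * r1 * (p["best"][d] - p["pos"][d])
                               + 1.5 * r2 * (gx[d] - p["pos"][d]))
                p["pos"][d] += p["vel"][d]
            f = sphere(p["pos"])
            if f < p["best_f"]:
                p["best_f"], p["best"] = f, p["pos"][:]
                if f < swarm_best[si][0]:
                    swarm_best[si] = (f, p["pos"][:])
    if it % EXCHANGE_EVERY == 0:  # sparse collaboration between swarms
        overall = min(swarm_best)
        swarm_best = [min(sb, overall) for sb in swarm_best]

best_f, best_x = min(swarm_best)
print("best objective value:", best_f)
```

Making the exchange interval large is what cuts the communication bottleneck: most iterations touch only node-local state.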
APA, Harvard, Vancouver, ISO, and other styles
17

Caversan, Fábio Lopes. "Exploração de relações entre técnicas simbólicas e conexionistas da inteligência computacional." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05092006-120232/.

Full text
Abstract:
Este trabalho consiste em uma contribuição à área de Inteligência Computacional, no que tange a algumas de suas principais técnicas: Computação Nebulosa e Computação Neural. Estas técnicas vêm sendo utilizadas para obter-se soluções de problemas que se apresentam complexos demais para a abordagem algorítmica ou modelagem matemática tradicionais. Entretanto, estes problemas são solucionados de forma trivial pelo aparato que compõe a chamada inteligência humana. A existência de relações, regras e transformações capazes de transferir modelos de problemas de um domínio para outro, traz grandes vantagens para a área de Inteligência Computacional. Teorias e modelos bem estabelecidos em uma das técnicas podem ser utilizados em outras, como por exemplo, os diversos métodos de aprendizado de Computação Neural e a capacidade de utilização de conhecimento especialista de Computação Nebulosa. Problemas modelados classicamente em uma técnica podem ser vistos à luz de outra, possibilitando uma melhor compreensão e otimização das soluções. É realizada uma exploração destas relações. São abordados alguns trabalhos anteriores que indicam a existência de algumas relações, e propostos alguns modelos para desenvolver o trabalho de pesquisa. Uma plataforma para realização de simulações e coleta de dados empíricos para as explorações é especificada. Parte da plataforma foi implementada, e simulações de uma transformação de modelos nebulosos para neurais foram realizadas. Os resultados destes experimentos são apresentados.
This work is a contribution to the area of Computational Intelligence, relating to two of its main techniques: Fuzzy Computing and Neural Computing. These techniques are used to solve problems that are too complex for traditional algorithmic approaches or mathematical modeling, yet are solved easily by the apparatus that composes so-called human intelligence. The existence of relations, rules, and transformations capable of transferring problem models from one domain to another brings great advantages to Computational Intelligence. Well-established theories and models in one technique can be used in others, for example, the various learning methods of Neural Computing and Fuzzy Computing's capacity to use expert knowledge. Problems classically modeled in one technique can be seen from another point of view, enabling a better understanding and optimization of the solutions. An exploration of these relations is carried out. Some previous works indicating the existence of such relations, and models to guide the research, are presented. A platform for simulation and empirical data collection for these explorations is specified. Part of the platform was implemented, and simulations of a transformation from fuzzy to neural models were carried out. The results of these experiments are presented.
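One concrete, well-known instance of such a fuzzy-to-neural transformation (my example, not necessarily the one simulated in the thesis) is that a zero-order Takagi-Sugeno fuzzy system with Gaussian memberships computes exactly the same function as a normalized radial-basis-function network:

```python
import math

# Fuzzy view and neural view of the same function: rule centers become RBF
# unit centers, rule consequents become output weights. Rule parameters below
# are arbitrary.
RULES = [  # (center, width, consequent) of each fuzzy rule
    (-1.0, 0.5, 0.0),
    (0.0, 0.5, 1.0),
    (1.0, 0.5, 0.0),
]

def fuzzy_eval(x):
    # Fuzzy view: fire every rule, then defuzzify by weighted average.
    degrees = [math.exp(-((x - c) / w) ** 2) for c, w, _ in RULES]
    return (sum(d * out for d, (_, _, out) in zip(degrees, RULES))
            / sum(degrees))

def rbf_eval(x):
    # Neural view: normalized radial basis units with linear output weights.
    acts = [math.exp(-((x - c) / w) ** 2) for c, w, _ in RULES]
    total = sum(acts)
    return sum((a / total) * out for a, (_, _, out) in zip(acts, RULES))

gaps = [abs(fuzzy_eval(x) - rbf_eval(x)) for x in (-1.5, -0.3, 0.0, 0.7, 2.0)]
print("largest discrepancy:", max(gaps))
```

The two evaluations are algebraically identical, which is what makes neural learning methods directly applicable to the fuzzy model's parameters.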
APA, Harvard, Vancouver, ISO, and other styles
18

Gao, Zhenning. "Parallel and Distributed Implementation of A Multilayer Perceptron Neural Network on A Wireless Sensor Network." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1383764269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Dobrucki, Mikołaj. "Applications of Artificial Intelligence in Lighting Systems for Home Environments." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23548.

Full text
Abstract:
Artificial Intelligence, recently one of the most popular topics in technology, has long been in the spotlight of Interaction Design. Despite its success in software and business-oriented cases, the adoption of Artificial Intelligence solutions in home environments remains relatively low. This study reflects on the key reasons for the low penetration of AI-based solutions in private households and formulates design considerations for further developments in this area, with a focus on artificial light sources. The design considerations are based on a literature review and on studies of multiple home environments gathered through qualitative interviews and context-mapping exercises. The health influence of lighting, multi-user interactions, and privacy-related and ethical concerns are taken into account as the key factors. The considerations have been validated with participants of the study through user-testing sessions of a digital prototype that virtualises a home environment and explores some common light-usage scenarios. The study argues that despite multiple efforts in this direction during the past three decades, the future of Artificial Intelligence in connected, intelligent homes does not lie in smart, autonomous systems. Instead, Artificial Intelligence can arguably be used to simplify and contextualise interactions between humans and their home environments, as well as to foster the development of parametric solutions for private households.
APA, Harvard, Vancouver, ISO, and other styles
20

Rahman, Hasibur. "Distributed Intelligence-Assisted Autonomic Context-Information Management : A context-based approach to handling vast amounts of heterogeneous IoT data." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-149513.

Full text
Abstract:
As an implication of the rapid growth of Internet-of-Things (IoT) data, the current focus has shifted towards utilizing and analysing the data in order to make sense of it. The aim is to make instantaneous, automated, and informed decisions that will drive the future IoT. This corresponds to extracting and applying knowledge from IoT data, which brings both a substantial challenge and high value. Context plays an important role in reaping value from data and is capable of countering the IoT data challenges. The management of heterogeneous contextualized data is infeasible and insufficient with existing solutions, which mandates new ones. Research until now has mostly concentrated on providing cloud-based IoT solutions; among other issues, this raises problems for real-time and fast decision-making. In view of this, this dissertation undertakes a study of a context-based approach entitled Distributed intelligence-assisted Autonomic Context Information Management (DACIM), the purpose of which is to efficiently (i) utilize and (ii) analyse IoT data. To address the challenges and solutions with respect to enabling DACIM, the dissertation starts by proposing a logical-clustering approach for proper IoT data utilization. The environment in which the growing number of Things is immersed changes rapidly and becomes dynamic. To this end, self-organization has been supported by proposing self-* algorithms, which resulted in 10 organized Things per second and a high accuracy rate for Things joining. Contextualized IoT data further requires scalable dissemination, which has been addressed by a Publish/Subscribe model, and it has been shown that a high publication rate and faster subscription matching are realisable. The dissertation ends with the proposal of a new approach that assists the distribution of intelligence for analysing context information, in order to enhance the intelligence of things. The approach brings some of the applications of knowledge from the cloud to the edge; the edge-based solution is equipped with intelligence that enables faster responses and reduced dependency on rules by leveraging artificial intelligence techniques. To infer knowledge for different IoT applications closer to the Things, a multi-modal reasoner has been proposed, which demonstrates faster response. The evaluation of the designed and developed DACIM gives promising results, which are distributed over seven publications; from this it can be concluded that it is feasible to realize a distributed intelligence-assisted context-based approach that contributes towards autonomic context information management in the ever-expanding IoT realm.

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 7: Submitted.

APA, Harvard, Vancouver, ISO, and other styles
21

Gui, Feng. "Development of a New Client-Server Architecture for Context Aware Mobile Computing." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/202.

Full text
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for them at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices. Context acquisition is centralized at the server to ensure the usability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes the user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis, so that it may help improve the results of a subsequent search. On the basis of these developments at the server side, various solutions are consequently provided at the client side. A software-based proxy component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; the implementation of such a component lends credence to this belief in that the context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from the user's daily activities. To meet the practical demands of a testing environment without imposing the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture design shows how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
APA, Harvard, Vancouver, ISO, and other styles
22

Melandri, Luca. "Introduction to Reservoir Computing Methods." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8268/.

Full text
Abstract:
This document covers the family of methods for training and exploiting recurrent neural networks known as Reservoir Computing. An introduction to Machine Learning in general is given first, to provide all the tools needed to understand the topic. Implementation details and an analysis of the strengths and weaknesses of the various approaches follow, supported by code and explanatory figures. The document closes with conclusions about the approaches, what could be improved, and practical applications. (Translated from the Italian.)
APA, Harvard, Vancouver, ISO, and other styles
23

Dwivedi, Y. K., L. Hughes, Elvira Ismagilova, G. Aarts, C. Coombs, T. Crick, Y. Duan, et al. "Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy." Elsevier, 2019. http://hdl.handle.net/10454/17208.

Full text
Abstract:
As far back as the industrial revolution, significant developments in technical innovation have succeeded in transforming numerous manual tasks and processes that had been in existence for decades, where humans had reached the limits of their physical capacity. Artificial Intelligence (AI) offers this same transformative potential for the augmentation and potential replacement of human tasks and activities within a wide range of industrial, intellectual and social applications. The pace of change in this new AI technological age is staggering, with new breakthroughs in algorithmic machine learning and autonomous decision-making engendering new opportunities for continued innovation. The impact of AI could be significant, with industries ranging from finance, healthcare, manufacturing, retail, supply chain, logistics and utilities all potentially disrupted by the onset of AI technologies. The study brings together the collective insight of a number of leading expert contributors to highlight the significant opportunities, realistic assessments of impact, challenges and potential research agenda posed by the rapid emergence of AI within a number of domains: business and management, government, the public sector, and science and technology. This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.
APA, Harvard, Vancouver, ISO, and other styles
24

Püschel, Georg, and Frank J. Furrer. "Cognitive Computing: Collected Papers." Technische Universität Dresden, 2015. https://tud.qucosa.de/id/qucosa%3A28990.

Full text
Abstract:
'Cognitive Computing' has initiated a new era in computer science. Cognitive computers are no longer rigidly programmed computers; instead, they learn from their interactions with humans, from the environment, and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assisting medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks and other modern computer science areas, but also includes sensors and actuators to interact with the physical world. Cognitive computers – also called 'intelligent machines' – emulate human cognitive, mental and intellectual capabilities. They aim to do for human mental power (the ability to use our brain in understanding and influencing our physical and information environment) what the steam engine and combustion motor did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic structure, information analysis (big data), the health care system, and many more, will rely on intelligent decisions taken by cognitive computers. A drawback of cognitive computers will be a shift in employment opportunities: a rising number of tasks will be taken over by intelligent machines, thus erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …). A possibly dangerous risk of cognitive computing is the threat posed by "super intelligent machines" to mankind.
As soon as they are sufficiently intelligent, deeply networked, and have access to the physical world, they may endanger many areas of human supremacy, and possibly even eliminate humans. Cognitive computing technology is based on new software architectures, the "cognitive computing architectures". Cognitive architectures enable the development of systems that exhibit intelligent behaviour.

Contents:
Introduction 5
1. Applying the Subsumption Architecture to the Genesis Story Understanding System – A Notion and Nexus of Cognition Hypotheses (Felix Mai) 9
2. Benefits and Drawbacks of Hardware Architectures Developed Specifically for Cognitive Computing (Philipp Schröppel) 19
3. Language Workbench Technology For Cognitive Systems (Tobias Nett) 29
4. Networked Brain-based Architectures for more Efficient Learning (Tyler Butler) 41
5. Developing Better Pharmaceuticals – Using the Virtual Physiological Human (Ben Blau) 51
6. Management of existential Risks of Applications leveraged through Cognitive Computing (Robert Richter) 61
APA, Harvard, Vancouver, ISO, and other styles
25

Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.

Full text
Abstract:
In today's nanoscale era, scaling down to even smaller feature sizes poses a significant challenge in device fabrication and in circuit and system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device that exhibits non-linear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on the past. Both the non-linearity and the memory property of memristors have the potential to enable the solution of spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called "Reservoir Computing" (RC). RC is a new paradigm that belongs to the class of artificial recurrent neural networks (RNN). However, it architecturally differs from traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random, recurrently connected non-linear elements. Learning is only implemented at the readout (i.e., the output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems. Real-world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows us to create random memristor networks, to simulate and evaluate them in Ngspice, and to train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as the network connectivity and the reservoir size, along with the GA parameters.
Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
APA, Harvard, Vancouver, ISO, and other styles
26

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Full text
Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs are largely different from living neuron networks (LNNs) in many aspects. Due to the oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. Therefore, a more brain-like artificial neural network is highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN), which is not only biologically meaningful, but also computationally powerful. The BIANN can serve as a novel computational intelligence tool in monitoring, modeling and control of the power systems. A comprehensive survey of ANNs applications in power system is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performances with low computational requirements. A novel, more brain-like artificial neural network, i.e. biologically inspired artificial neural network (BIANN), is proposed in this dissertation to bridge the gap between ANNs and LNNs and provide a novel tool for monitoring and control in power systems. A comprehensive survey of the spiking models of living neurons as well as the coding approaches is presented to review the state-of-the-art in BIANN research. 
The proposed BIANNs are based on spiking models of living neurons with the adoption of reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, which makes them a strong candidate for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications. The proposed modeling and control schemes are validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based technique can provide better control of the power system, enhancing its reliability and tolerance to disturbances. It is clearly shown that the proposed BIANN-based modeling and control schemes can provide faster and more accurate control for power system applications. The conclusions, the recommendations for future research, and the major contributions of this research are presented at the end.
APA, Harvard, Vancouver, ISO, and other styles
27

Araújo, Ricardo Matsumura de. "Memetic networks : problem-solving with social network models." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/25515.

Full text
Abstract:
Sistemas sociais têm se tornado cada vez mais relevantes para a Ciência da Computação em geral e para a Inteligência Artificial em particular. Tal interesse iniciou-se pela necessidade de analisar-se sistemas baseados em agentes onde a interação social destes agentes pode ter um impacto no resultado esperado. Uma tendência mais recente vem da área de Processamento Social de Informações, Computação Social e outros métodos crowdsourced, que são caracterizados por sistemas de computação compostos de pessoas reais, com um forte componente social na interação entre estas. O conjunto de todas interações sociais e os atores envolvidos compõem uma rede social, que pode ter uma forte influência em o quão eficaz ou eficiente o sistema pode ser. Nesta tese, exploramos o papel de estruturas de redes em sistemas sociais que visam a solução de problemas. Enquadramos a solução de problemas como uma busca por soluções válidas em um espaço de estados e propomos um modelo - a Rede Memética - que é capaz de realizar busca utilizando troca de informações (memes) entre atores interagindo em uma rede social. Tal modelo é aplicado a uma variedade de cenários e mostramos como a presença da rede social pode melhorar a capacidade do sistema em encontrar soluções. Adicionalmente, relacionamos propriedades específicas de diversas redes bem conhecidas ao comportamento observado para os algoritmos propostos, resultando em um conjunto de regras gerais que podem melhorar o desempenho de tais sistemas sociais. Por fim, mostramos que os algoritmos propostos são competitivos com técnicas tradicionais de busca heurística em diversos cenários.
Social systems are increasingly relevant to computer science in general and artificial intelligence in particular. Such interest was first sparked by agent-based systems, where the social interaction of agents can be relevant to the outcome produced. A more recent trend comes from the general area of Social Information Processing, Social Computing and other crowdsourced systems, which are characterized by computing systems composed of people and strong social interactions between them. The set of all social interactions and actors composes a social network, which may strongly influence how effective the system can be. In this thesis, we explore the role of network structure in social systems aimed at solving problems, focusing on numerical and combinatorial optimization. We frame problem solving as a search for valid solutions in a state space and propose a model, the Memetic Network, that performs search through the exchange of information, named memes, between actors interacting in a social network. The model is applied to a variety of scenarios, and we show that the presence of a social network greatly improves the system's capacity to find good solutions. In addition, we relate specific properties of many well-known networks to the behavior displayed by the proposed algorithms, resulting in a set of general rules that may improve the performance of such social systems. Finally, we show that the proposed algorithms can be competitive with traditional heuristic search algorithms in a number of scenarios.
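A minimal sketch of the memetic-network idea, agents on a social network exchanging memes while searching a state space, might look like this. The ring topology, the OneMax objective, and all parameters are my illustrative assumptions, not the thesis's implementation:

```python
import random

random.seed(3)

# Agents on a ring social network each hold a bit-string "meme", copy the
# fittest meme in their neighbourhood, and locally mutate it.
AGENTS, BITS, ROUNDS = 20, 30, 80

def fit(meme):
    return sum(meme)  # OneMax: maximize the number of 1 bits

memes = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(AGENTS)]

for _ in range(ROUNDS):
    nxt = []
    for i in range(AGENTS):
        # Ring topology: an agent sees only its two immediate neighbours.
        neighbourhood = [memes[i], memes[i - 1], memes[(i + 1) % AGENTS]]
        meme = max(neighbourhood, key=fit)[:]
        mutant = meme[:]
        mutant[random.randrange(BITS)] ^= 1  # local innovation: flip one bit
        nxt.append(mutant if fit(mutant) >= fit(meme) else meme)
    memes = nxt

best = max(fit(m) for m in memes)
print("best fitness:", best, "of", BITS)
```

The social component is the `max` over the neighbourhood: good memes diffuse along the ring, so many agents refine the same promising solution in parallel, and the network topology controls how fast that diffusion happens.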
APA, Harvard, Vancouver, ISO, and other styles
28

Livi, Federico. "Supervised Learning with Graph Structured Data for Transprecision Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19714/.

Full text
Abstract:
In the era of the Internet of Things, Big Data, and Industry 4.0, the growing demand for resources and tools capable of processing the huge amounts of data and information available at any moment has drawn attention to problems, no longer negligible, concerning energy consumption and the resulting costs. This is the so-called power wall: the physical difficulty machines have in sustaining the power consumption required to process ever-larger volumes of data and to execute ever more sophisticated tasks. Among the techniques that have emerged in recent years to tackle this problem is Transprecision Computing, an approach that improves energy consumption at the expense of precision. By reducing the number of precision bits in floating-point operations, greater energy efficiency can be obtained, together with a non-linear decrease in computation precision. Depending on the application domain, this tradeoff can indeed lead to significant improvements, but it remains difficult to find the optimal precision for every variable while respecting an upper bound on the error. In the literature, this problem is therefore tackled with heuristics and methodologies that directly involve optimization and machine learning models. This thesis seeks to further improve these approaches by introducing new machine learning models that also analyse complex relations among the variables. To this end, it also examines techniques that work directly on graph-structured data, through the study of more complex neural networks, the so-called graph convolutional networks. (Translated from the Italian.)
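The precision/error tradeoff that transprecision computing exploits can be illustrated by truncating the float64 mantissa and measuring the error this injects into a simple computation. This is a toy model of the mechanism only (fewer mantissa bits cost less energy in hardware); the thesis's contribution is choosing such precisions with machine learning:

```python
import math
import struct

def truncate_mantissa(x, bits):
    """Keep only the top `bits` bits of the 52-bit float64 mantissa."""
    raw = struct.unpack("<Q", struct.pack("<d", x))[0]
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack("<d", struct.pack("<Q", raw & mask))[0]

def dot(a, b, bits):
    s = 0.0
    for x, y in zip(a, b):  # truncate after every multiply and every add
        s = truncate_mantissa(s + truncate_mantissa(x * y, bits), bits)
    return s

a = [math.sin(i) for i in range(100)]
b = [math.cos(i) for i in range(100)]
exact = dot(a, b, 52)  # 52 bits = full float64 precision
errors = {bits: abs(dot(a, b, bits) - exact) for bits in (8, 16, 32)}
print(errors)  # error shrinks as the mantissa keeps more bits
```

Picking, per variable, the smallest `bits` whose accumulated error still satisfies the application's error bound is exactly the optimization problem the abstract describes.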
APA, Harvard, Vancouver, ISO, and other styles
29

Fialho, Álvaro Roberto Silvestre. "Exploração de relações entre as técnicas nebulosas e evolutivas da inteligência computacional." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-26072007-173902/.

Full text
Abstract:
Neste trabalho foi realizada uma busca por relações, regras e transformações entre duas metodologias constituintes da Inteligência Computacional - a Computação Nebulosa e a Computação Evolutiva. Com a organização e sistematização da existência de tais transformações, obtém-se uma mudança na modelagem de soluções que as utilizam de forma conjunta, possibilitando que teorias e modelos bem estabelecidos em uma das metodologias possam ser aproveitados pela outra de uma forma mais robusta, correta por construção, intrínseca e transparente. Um modelo foi proposto para direcionar o trabalho de pesquisa. Através da análise desse modelo e de uma revisão bibliográfica realizada, transformações pontuais entre as metodologias foram elencadas, e posteriormente consolidadas por meio de experimentos práticos: uma Base de Conhecimento (BC) de um Controlador Lógico Nebuloso foi criada e modificada, conforme a necessidade, através de um Algoritmo Genético (AG). Com a abordagem desenvolvida, além da criação de BCs a partir de pouquíssimo conhecimento sobre o domínio do problema, tornou-se possível a inserção de novos "comportamentos desejados" em BCs já existentes, automaticamente, através de AGs. Os resultados desses experimentos, realizados sobre uma plataforma computacional especificada e implementada para este fim, foram apresentados e analisados.
This work addressed a search of relations, rules and transformations between two Computational Intelligence constituent methodologies - Fuzzy Computing and Evolutionary Computing. The existence of these relations changes the actual way of solutions modeling that uses these methodologies, allowing the utilization of well established theories and models of one technique by the other in a more robust, intrinsic and transparent way. Besides the research and systematization of points that indicate the existence of relations between the two methodologies, a model to guide these exploration was proposed. By this model analysis and by the bibliographic revision made, punctual transformations were pointed out, and further consolidated through practical experiments: a Knowledge Base (KB) of a Fuzzy Logic Controller was created and modified automatically by a Genetic Algorithm. With the developed approach, besides the creation of KBs, it became possible to automatically insert new \"desired behaviors\" to existent KBs. The results of such experiments, realized through a computational platform specified and implemented to this task, were presented and analyzed.
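The central experiment of this abstract, tuning a fuzzy controller's knowledge base with a genetic algorithm, can be sketched roughly as follows (the membership functions, target behaviour, and GA settings are all illustrative assumptions, not the author's actual setup):

```python
import random

# Minimal GA-tunes-fuzzy-controller sketch: the genome encodes the centers
# of three triangular membership functions of a one-input controller, and
# fitness is the negated squared error against a desired input/output map.
random.seed(1)

def triangle(x, c, w=0.5):
    """Triangular membership function centred at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def controller(x, centers, outputs=(0.0, 0.5, 1.0)):
    """Weighted-average (Sugeno-style) defuzzification."""
    mu = [triangle(x, c) for c in centers]
    s = sum(mu)
    return sum(m * o for m, o in zip(mu, outputs)) / s if s else 0.0

target = lambda x: x  # desired behaviour: the identity mapping
samples = [i / 20 for i in range(21)]

def fitness(centers):
    return -sum((controller(x, centers) - target(x)) ** 2 for x in samples)

def evolve(pop_size=30, generations=60):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            children.append([(g1 + g2) / 2 + random.gauss(0, 0.05)
                             for g1, g2 in zip(a, b)])  # crossover + mutation
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("best centers:", sorted(best), "fitness:", fitness(best))
```

The GA discovers membership-function placements from essentially no domain knowledge, mirroring the abstract's claim about creating KBs automatically.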
APA, Harvard, Vancouver, ISO, and other styles
30

Fountoukidis, Dimitrios P. "Adaptive management of emerging battlefield network." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FFountoukidis.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management and M.S. in Modeling Virtual Environment and Simulation)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Alex Bordetsky, John Hiles. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
31

Martin, Olga J. "Retranslation: a problem in computing with perceptions." Diss., online access via UMI, 2008.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2008.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
32

DeBruyne, Sandra DeBruyne. "Bio-Inspired Evolutionary Algorithms for Multi-Objective Optimization Applied to Engineering Applications." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1542282067378143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sherron, Catherine Elizabeth. "Critical Values: Feminist Philosophy of Science and the Computing Sciences." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1054218563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Yacoubi, Alya. "Vers des agents conversationnels capables de réguler leurs émotions : un modèle informatique des tendances à l’action." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS378/document.

Full text
Abstract:
Conversational virtual agents with social behavior typically draw on at least two disciplines: computer science and psychology. In most cases, psychological findings are converted into computational mechanisms so that agents look and behave in a believable manner. This thesis sits at the intersection of these two fields and aims to increase the believability of conversational agents by modelling emotions. More precisely, we focus on task-oriented conversational agents, used in a professional context as a customer-relationship channel to answer users' requests from a domain knowledge base. We propose an affective model of the generation and regulation of emotional responses during a task-oriented interaction, inspired by human affective mechanisms. The generation of emotional responses builds on the psychological theory of Action Tendencies, while the regulation mechanism is inspired by work on social emotion regulation in empirical psychology; both mechanisms rely on the agent's goals, beliefs, and ideals, represented in a formalism inspired by BDI logic. The model was implemented in a conversational-agent architecture with a natural language processing engine developed by the company DAVI. To confirm the relevance of our approach, we conducted several experimental studies. The first evaluated verbal expressions of action tendencies in a human-agent dialogue. The second studied the impact of different emotion regulation strategies on the user's perception of the agent, and informed the design of a social regulation algorithm grounded in theoretical and empirical findings. The third evaluated affective agents in real-time interaction with participants. Our results show that the regulation process we implemented increases the perceived credibility and professionalism of the agents and, more generally, improves the interaction. They highlight the need to take into account two complementary emotional mechanisms, the generation and the regulation of emotional responses, and open perspectives on different ways of managing emotions and their impact on how the agent is perceived.
APA, Harvard, Vancouver, ISO, and other styles
35

Ostheimer, Julia. "Human-in-the-loop Computing : Design Principles for Machine Learning Algorithms of Hybrid Intelligence." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-94051.

Full text
Abstract:
Artificial intelligence (AI) is revolutionizing contemporary industries and is applied in domains ranging from recommendation systems to self-driving cars. In scenarios in which humans interact with an AI, inaccurate algorithms could lead to human mistreatment or even harmful events. Human-in-the-loop computing is a machine learning approach that aims at hybrid intelligence, the combination of human and machine intelligence, to achieve accurate and interpretable results. This thesis applies human-in-the-loop computing in a Design Science Research project with a Swedish manufacturing company to make operational processes more efficient, and investigates emerging design principles useful for designing machine learning algorithms of hybrid intelligence. The thesis makes two key contributions. First, a theoretical framework is built that comprises general design knowledge originating from Information Systems (IS) research. Second, the analysis of empirical findings leads to a review of general IS design principles and to the formulation of design principles useful for human-in-the-loop computing. Whereas the principle of AI-readiness improves the likelihood of strategic AI success, the principle of hybrid intelligence shows how useful it can be to trigger a demand for human-in-the-loop computing among involved stakeholders. The principle of use-case marketing might help designers promote the customer benefits of applying human-in-the-loop computing in a research setting. By utilizing the principle of power relationship and the principle of human-AI trust, designers can demonstrate the humans' power over AI and build a trusting human-machine relationship. Future research is encouraged to extend and specify the formulated design principles and to employ human-in-the-loop computing in different research settings.
With regard to technological advancements in brain-machine interfaces, human-in-the-loop computing might even become much more critical in the future.
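The core human-in-the-loop pattern this abstract describes can be sketched minimally (an illustrative assumption, not the thesis's system): a model handles confident predictions itself and routes uncertain ones to a human oracle.

```python
def classify(x, threshold=0.5, margin=0.15):
    """Route to the machine when confident, to a human otherwise."""
    score = x  # stand-in for a trained model's probability output
    if abs(score - threshold) >= margin:
        return "machine", int(score > threshold)
    return "human", human_oracle(x)

def human_oracle(x):
    # Placeholder for a real human annotator's judgement.
    return int(x > 0.5)

decisions = [classify(s) for s in (0.05, 0.45, 0.55, 0.95)]
print(decisions)  # → [('machine', 0), ('human', 0), ('human', 1), ('machine', 1)]
```

The `margin` parameter is the design knob: widening it trades machine autonomy for human oversight, which is the hybrid-intelligence balance the thesis's design principles address.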
APA, Harvard, Vancouver, ISO, and other styles
36

Kruppa, Michael. "Migrating characters: effective user guidance in instrumented environments." Berlin: Aka, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2898568&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhong, Christopher. "Modeling humans as peers and supervisors in computing systems through runtime models." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/14047.

Full text
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
Scott A. DeLoach
There is a growing demand for more effective integration of humans and computing systems, specifically in multiagent and multirobot systems. There are two aspects to consider in human integration: (1) the ability to control an arbitrary number of robots (particularly heterogeneous robots) and (2) integrating humans as peers in computing systems instead of being just users or supervisors. With traditional supervisory control of multirobot systems, the number of robots that a human can manage effectively is between four and six [17]. A limitation of traditional supervisory control is that the human must interact individually with each robot, which limits the upper-bound on the number of robots that a human can control effectively. In this work, I define the concept of "organizational control" together with an autonomous mechanism that can perform task allocation and other low-level housekeeping duties, which significantly reduces the need for the human to interact with individual robots. Humans are very versatile and robust in the types of tasks they can accomplish. However, failures in computing systems are common and thus redundancies are included to mitigate the chance of failure. When all redundancies have failed, system failure will occur and the computing system will be unable to accomplish its tasks. One way to further reduce the chance of a system failure is to integrate humans as peer "agents" in the computing system. As part of the system, humans can be assigned tasks that would have been impossible to complete due to failures.
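The "organizational control" idea above, in which an autonomous mechanism performs task allocation so the human need not command each robot individually, can be sketched as follows (the greedy least-loaded policy, task list, and robot names are illustrative assumptions, not the dissertation's mechanism):

```python
import heapq

def allocate(tasks, robots):
    """Greedy allocation: each task goes to the currently least-loaded robot."""
    heap = [(0.0, name) for name in robots]  # (accumulated load, robot)
    heapq.heapify(heap)
    assignment = {name: [] for name in robots}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        load, name = heapq.heappop(heap)
        assignment[name].append(task)
        heapq.heappush(heap, (load + cost, name))
    return assignment

tasks = [("scout", 3), ("carry", 5), ("map", 2), ("charge", 1)]
robots = ["r1", "r2"]
print(allocate(tasks, robots))
```

The human supplies only the task list; the allocator does the per-robot housekeeping, which is what lifts the four-to-six-robot ceiling of direct supervisory control.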
APA, Harvard, Vancouver, ISO, and other styles
38

Littlefield, William Joseph II. "Abductive Humanism: Comparative Advantages of Artificial Intelligence and Human Cognition According to Logical Inference." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1554480107736449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Noble, Diego Vrague. "The impact of social context in social problem solving." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/115613.

Full text
Abstract:
Our inability to perceive and understand all the factors that account for real-world phenomena forces us to rely on cues when reasoning and making decisions about the world. Cues can be internal, such as our psychological state and motivations, or external, such as the resources available, the physical environment, and the social environment. The social environment, or social context, encompasses the relationships and cultural settings through which we interact and function in society. Much of our thinking is influenced by it, and we constantly change the way we solve problems in response to it. Nevertheless, this human trait has been little explored by computational models of social problem solving, which have lacked the heterogeneity and self-adaptive behavior observed in humans. In this work, we investigate the impact of social context on social problem solving by means of extensive numerical simulations with a modified social model. We show evidence that social context plays a key role in how the system behaves and performs. More precisely, we show that the centrality of an agent in the social network is an unreliable predictor of the agent's contribution when the agent can adapt its problem-solving strategy in response to context. We also find that social-context information can improve the group's convergence to good solutions, that diversity in search strategies does not necessarily translate into diversity of solutions in the population, and that even when individuals perceive the social context in the same way, the way they react to it can lead to different outcomes along the search process. Together, these results support the idea that social context should be considered in experiments on social problem solving. We conclude by discussing the overall impact of this work and pointing out directions for future research.
APA, Harvard, Vancouver, ISO, and other styles
40

Green, Robert C. II. "Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1338894641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wild, Rafael. "Agências do artificial e do humano : uma análise de noções do humano na inteligência artificial a partir de perspectivas sociais e culturais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/34146.

Full text
Abstract:
This thesis analyzes notions of the human that are appropriated for technological ends in computer systems produced by practitioners of Artificial Intelligence. It is based on participant-observation fieldwork within two academic AI research groups, one Brazilian and one European (Portuguese), and connects with the demands of Informatics in Education by focusing, in a non-strict way, on projects with a pedagogical character. Through the meanings and practices observed from inside the groups, the study sought to understand the participants' knowledge as belonging to a particular and peculiar culture, and the internal logic of that culture. Special attention was given to the artifacts produced: computer systems invested with the functional characteristics desired by the participants, materializing their practices and premises. Categories such as emotion, knowledge, culture, and agency were followed as they are conceptualized, established, and put into practice as categories of the human, not only as definitions expressed in text, but as materialized in artifacts and in expectations about the encounter between these artifacts and their users. The practices and notions of the field were systematically put into perspective using theoretical tools from Science and Technology Studies, especially those of B. Latour, L. Suchman, and D. Forsythe. In the field studied, these practices and notions are scientific and technological knowledge with an established status as valid and legitimate; rather than denying or devaluing this validity and legitimacy, the analysis systematically placed them in perspective, showing how they relate to specific, situated processes of production and legitimation, and how those processes could be considered in other ways. With these results, the thesis hopes to contribute to a more sophisticated dialogue within Informatics in Education between technological practices, Computer Science and Artificial Intelligence, and the social and pedagogical application of these practices.
APA, Harvard, Vancouver, ISO, and other styles
42

Orazi, Filippo. "Quantum machine learning: development and evaluation of the Multiple Aggregator Quantum Algorithm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25062/.

Full text
Abstract:
Human society has always been shaped by its technology, so much so that ages and parts of our history are often named after the discoveries of the time. The growth of modern society derives largely from the introduction of classical computers, which brought innovations like the automation of repetitive tasks and long-distance communication. However, this explosive technological advancement could come to a heavy stop when computers reach their physical limitations and the empirical law known as Moore's law comes to an end. Foreshadowing these limits and hoping for an even more powerful technology, the branch of quantum computation was born forty years ago. Quantum computation turns to its advantage the same quantum effects that could stop the progress of traditional computation, and aims to deliver hardware and software capable of even greater computational power. In this context, this thesis presents the implementation of a variational quantum machine learning algorithm called the quantum single-layer perceptron. We start by briefly explaining the foundations of quantum computing and machine learning, then dive into the theoretical approach of the Multiple Aggregator Quantum Algorithm, and finally deliver a versatile implementation of the quantum counterpart of a single-hidden-layer perceptron. To conclude, we train the model to perform binary classification on standard benchmark datasets, alongside three baseline quantum machine learning models taken from the literature, and run tests on both simulated quantum hardware and real devices to compare the performance of the various models.
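A much smaller cousin of the variational approach this abstract describes can be simulated classically (an illustrative assumption, not the thesis code): a single-qubit classifier where the input is angle-encoded by an RY rotation, a trainable RY(theta) follows, and the class is read from the expectation of Z. A naive grid search stands in for the variational optimizer.

```python
import numpy as np

def ry(angle):
    """Matrix of the single-qubit RY rotation."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(x, theta):
    """<Z> after RY(theta) RY(x) |0>, which equals cos(x + theta)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2

# Toy task: label 1 when x > pi/2; predict 1 when <Z> is negative.
xs = np.linspace(0, np.pi, 20)
labels = (xs > np.pi / 2).astype(int)

def accuracy(theta):
    preds = [int(expectation_z(x, theta) < 0) for x in xs]
    return np.mean(np.array(preds) == labels)

thetas = np.linspace(-np.pi, np.pi, 101)      # grid search in place of a
best_theta = max(thetas, key=accuracy)        # gradient-based optimizer
print("accuracy:", accuracy(best_theta))
```

Real variational models replace the grid search with parameter-shift gradients and run the circuit on simulated or physical quantum hardware, as the thesis does.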
APA, Harvard, Vancouver, ISO, and other styles
43

Vinckier, Quentin. "Analog bio-inspired photonic processors based on the reservoir computing paradigm." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/237069.

Full text
Abstract:
For many challenging problems where the mathematical description is not explicitly defined, artificial intelligence methods appear to be much more robust compared to traditional algorithms. Such methods share the common property of learning from examples in order to “explore” the problem to solve. Then, they generalize these examples to new and unseen input signals. The reservoir computing paradigm is a bio-inspired approach drawn from the theory of artificial Recurrent Neural Networks (RNNs) to process time-dependent data. This machine learning method was proposed independently by several research groups in the early 2000s. It has enabled a breakthrough in analog information processing, with several experiments demonstrating state-of-the-art performance for a wide range of hard nonlinear tasks. These tasks include for instance dynamic pattern classification, grammar modeling, speech recognition, nonlinear channel equalization, detection of epileptic seizures, robot control, time series prediction, brain-machine interfacing, power system monitoring, financial forecasting, or handwriting recognition. A Reservoir Computer (RC) is composed of three different layers. There is first the neural network itself, called “reservoir”, which consists of a large number of internal variables (i.e. reservoir states) all interconnected together to exchange information. The internal dynamics of such a system, driven by a function of the inputs and the former reservoir states, is thus extremely rich. Through an input layer, a time-dependent input signal is applied to all the internal variables to disturb the neural network dynamics. Then, in the output layer, all these reservoir states are processed, often by taking a linear combination thereof at each time-step, to compute the output signal. Let us note that the presence of a non-linearity somewhere in the system is essential to reach high performance computing on nonlinear tasks. 
The principal novelty of the reservoir computing paradigm was to propose an RNN where most of the connection weights are generated randomly, except for the weights adjusted to compute the output signal from a linear combination of the reservoir states. In addition, some global parameters can be tuned to get the best performance, depending on the reservoir architecture and on the task. This simple and easy process considerably decreases the training complexity compared to traditional RNNs, for which all the weights needed to be optimized. RC algorithms can be programmed using modern traditional processors. But these electronic processors are better suited to digital processing for which a lot of transistors continuously need to be switched on and off, leading to higher power consumption. As we can intuitively understand, processors with hardware directly dedicated to RC operations – in other words, analog bio-inspired processors – could be much more efficient regarding both speed and power consumption. Based on the same idea of high speed and low power consumption, the last few decades have seen an increasing use of coherent optics in the transport of information thanks to its high bandwidth and high power efficiency advantages. In order to address the future challenge of high performance, high speed, and power efficient nontrivial computing, it is thus natural to turn towards optical implementations of RCs using coherent light. Over the last few years, several physical implementations of RCs using optics and (opto)electronics have been successfully demonstrated. In the present PhD thesis, the reservoirs are based on a large coherently driven linear passive fiber cavity. The internal states are encoded by time-multiplexing in the cavity. Each reservoir state is therefore processed sequentially. 
This reservoir architecture exhibits many qualities that were either absent or not simultaneously present in previous works: we can perform analog optical signal processing; the easy tunability of each key parameter achieves the best operating point for each task; the system is able to reach a strikingly weak noise floor thanks to the absence of active elements in the reservoir itself; a richer dynamics is provided by operating in coherent light, as the reservoir states are encoded in both the amplitude and the phase of the electromagnetic field; high power efficiency is obtained as a result of the passive nature and simplicity of the setup. However, it is important to note that at this stage we have only obtained low optical power consumption for the reservoir itself. We have not tried to minimize the overall power consumption, including all control electronics. The first experiment reported in chapter 4 uses a quadratic non-linearity on each reservoir state in the output layer. This non-linearity is provided by a readout photodiode since it produces a current proportional to the intensity of the light. On a number of benchmark tasks widely used in the reservoir computing community, the error rates demonstrated with this RC architecture – both in simulation and experimentally – are, to our knowledge, the lowest obtained so far. Furthermore, the analytic model describing our experiment is also of interest, as it constitutes a very simple high performance RC algorithm. The setup reported in chapter 4 requires offline digital post-processing to compute its output signal by summing the weighted reservoir states at each time-step. In chapter 5, we numerically study a realistic model of an optoelectronic “analog readout layer” adapted to the setup presented in chapter 4. This readout layer is based on an RLC low-pass filter acting as an integrator over the weighted reservoir states to autonomously generate the RC output signal. 
On three benchmark tasks, we obtained very good simulation results that need to be confirmed experimentally in the future. These promising simulation results pave the way for standalone high performance physical reservoir computers. The RC architecture presented in chapter 5 is an autonomous optoelectronic implementation able to electrically generate its output signal. In order to contribute to the challenge of all-optical computing, chapter 6 highlights the possibility of processing information autonomously and optically using an RC based on two coherently driven passive linear cavities. The first one constitutes the reservoir itself and pumps the second one, which acts as an optical integrator on the weighted reservoir states to optically generate the RC output signal after sampling. A sine non-linearity is implemented on the input signal, whereas both the reservoir and the readout layer are kept linear. Let us note that, because the non-linearity in this system is provided by a Mach-Zehnder modulator on the input signal, the input signal of this RC configuration needs to be an electrical signal. On the contrary, the RC implementation presented in chapter 5 processes optical input signals, but its output is electrical. We obtained very good simulation results on a single task and promising experimental results on two tasks. At the end of this chapter, interesting perspectives are pointed out to improve the performance of this challenging experiment. This system constitutes the first autonomous photonic RC able to optically generate its output signal.
Doctorat en Sciences de l'ingénieur et technologie
info:eu-repo/semantics/nonPublished
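The three-layer RC structure described in this abstract (fixed random reservoir, input layer, trained linear readout) can be sketched as a classical echo state network (an illustrative simulation only; the reservoir size, toy one-step-ahead task, and ridge readout are assumptions, not the thesis's photonic setup):

```python
import numpy as np

# Minimal echo-state-network sketch: a fixed random recurrent reservoir is
# driven by the input, and only the linear readout is trained (here by
# ridge regression), as in the reservoir computing paradigm.
rng = np.random.default_rng(0)
N, T = 100, 500                              # reservoir size, sequence length

W_in = rng.uniform(-0.5, 0.5, (N, 1))        # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N, N))           # fixed random reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

u = np.sin(np.arange(T) * 0.2)[:, None]      # input signal
y = np.sin((np.arange(T) + 1) * 0.2)         # target: one step ahead

x = np.zeros((N, 1))
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W_in @ u[t:t + 1].T + W @ x)  # reservoir update
    states[t] = x.ravel()

washout = 50                                  # discard the initial transient
A, b = states[washout:], y[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)  # ridge readout

pred = A @ W_out
nrmse = np.sqrt(np.mean((pred - b) ** 2)) / np.std(b)
print(f"NRMSE: {nrmse:.4f}")
```

Only `W_out` is learned; everything else stays random, which is the training-complexity advantage the abstract emphasizes, and the photonic implementations replace the tanh reservoir with a driven optical cavity.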
APA, Harvard, Vancouver, ISO, and other styles
44

Neto, Ary Fagundes Bressane. "Uma arquitetura para agentes inteligentes com personalidade e emoção." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-28072010-121443/.

Full text
Abstract:
One of the main motivations of Artificial Intelligence in the context of digital entertainment systems is to create characters that adapt to new situations, are hard to predict, learn quickly, remember past situations, and display a great diversity of behaviour that remains consistent and convincing over time. According to recent studies in the fields of Neuroscience and Psychology, the ability to solve problems is not tied solely to the manipulation of symbols, but also to the exploration of the environment and to social interaction, which can be expressed in the form of emotional phenomena. The results of these studies confirm the fundamental role that personality and emotions play in the activities of perception, attention, planning, reasoning, creativity, learning, memory and decision making. When modules for handling personality and emotions are incorporated into a theory of agents, it becomes possible to build Believable Agents. The main objective of this work is to develop and implement an intelligent agent architecture for building synthetic characters whose affective states influence their cognitive activities. The architecture takes the BDI model (Beliefs, Desires and Intentions) as its basis, adding an Affective Module to the modules of an existing implementation of that model. The Affective Module consists of three sub-modules (Personality, Mood and Emotion) and influences the agent's cognitive activities of perception, memory and decision making. Two proofs of concept (experiments) were built: a simulation of the "Iterated Prisoner's Dilemma" problem and a computerised version of the "Memory Game".
These experiments allowed an empirical evaluation of the influence of personality, mood and emotion on the agents' cognitive activities and, consequently, on their behaviour. The results show that the new architecture produces agents with more coherent, adaptive and cooperative behaviour than agents built with architectures whose cognitive activities do not consider affective state, and behaviour closer to that of a human agent than to optimal or random behaviour. This evidence of success indicates that agents built with the architecture proposed in this dissertation are an advance towards the development of Believable Agents.
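As a toy illustration of the idea described in this abstract (not the thesis's actual model), the sketch below gives an Iterated Prisoner's Dilemma agent a fixed personality trait, a slow-moving mood and a fast-decaying emotion, all of which bias its decision. Every class name, update rule and coefficient here is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class AffectiveModule:
    """Hypothetical simplification of an affective module: personality is
    fixed, mood drifts slowly with events, emotion reacts strongly but
    decays between steps."""
    agreeableness: float = 0.5   # personality trait, fixed
    mood: float = 0.0            # slow-moving state in [-1, 1]
    emotion: float = 0.0         # fast-decaying state in [-1, 1]

    def feel(self, event_valence: float) -> None:
        # Emotion tracks the event directly; mood absorbs only a fraction.
        self.emotion = max(-1.0, min(1.0, event_valence))
        self.mood = max(-1.0, min(1.0, 0.9 * self.mood + 0.1 * event_valence))

    def decay(self) -> None:
        self.emotion *= 0.5  # emotions fade quickly

class AffectiveAgent:
    """Prisoner's Dilemma agent whose cooperation threshold is modulated
    by its affective state."""
    def __init__(self, affect: AffectiveModule):
        self.affect = affect

    def decide(self) -> str:
        # Positive mood and emotion raise the willingness to cooperate.
        score = (self.affect.agreeableness
                 + 0.3 * self.affect.mood
                 + 0.2 * self.affect.emotion)
        return "cooperate" if score >= 0.5 else "defect"

    def observe(self, opponent_move: str) -> None:
        self.affect.feel(1.0 if opponent_move == "cooperate" else -1.0)
        self.affect.decay()
```

A neutral agent starts out cooperative; observing a defection sours its mood and emotion, tipping the next decision toward defection.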
APA, Harvard, Vancouver, ISO, and other styles
45

Ushie, Ogri James. "Intelligent optimisation of analogue circuits using particle swarm optimisation, genetic programming and genetic folding." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13643.

Full text
Abstract:
This research presents several intelligent optimisation methods: the genetic algorithm (GA), particle swarm optimisation (PSO), the artificial bee colony algorithm (ABCA), the firefly algorithm (FA) and bacterial foraging optimisation (BFO). It applies them to the optimisation of analogue filter and amplifier circuits, taking a cascode amplifier design as a case study, with the aim of determining which algorithm performs best. A small-signal analysis (SSA) conversion of the cascode circuit is performed, and mesh analysis transforms the circuit into matrix form. Computer programs implementing the above algorithms are developed in Matlab to optimise the cascode amplifier circuit. The objective function is based on input resistance, output resistance, power consumption, gain, and the upper and lower frequency bands. The results compare these algorithms with a Nelder-Mead optimisation and with the original circuit simulated in PSpice. Four circuit element types (resistors, capacitors, transistors and operational amplifiers (op-amps)) are targeted by the optimisation and subsequently compared to the initial circuit. The PSO-based result proved best, followed by the GA, in terms of power-consumption reduction and frequency response. This work also modifies the symbolic circuit analysis in Matlab (MSCAM) tool, which uses a netlist from PSpice or from simulation to generate matrices; these matrices are used for optimisation or to compute circuit parameters. The tool is modified to handle both active and passive elements such as inductors, resistors, capacitors, transistors and op-amps. Transistors and op-amps are transformed into their small-signal models, which are straightforward to implement in code.
Results are presented to illustrate the potential of the algorithm; compared to PSpice simulation, the approach handles larger matrix dimensions than the existing symbolic circuit analysis in Matlab tool (SCAM). SCAM forms its matrices by adding extra rows and columns, which consumes more computing resources and limits its performance. Next, this work attempts to reduce the component count in high-pass, low-pass and all-pass active filters, using a lower-order filter to realise the same frequency-response curve as a higher-order one. The optimisers applied to the filters are the GA and PSO (the two best methods) and Nelder-Mead (the worst method). The filters are converted into their small-signal form, and nodal analysis transforms each circuit into matrix form. Results for high-pass, low-pass and all-pass active filters demonstrate the effectiveness of the technique, showing that, with computer code, a lower-order op-amp filter can realise the same results as a higher-order one. PSO achieves the best frequency response in all three cases, followed by the GA, whereas Nelder-Mead gives the worst results. Furthermore, this research introduces genetic folding (GF), MSCAM and automatically generated netlists into existing genetic programming (GP), a new contribution of this work that yields an independent Matlab toolbox for the evolution of passive and active filter circuits. The evolution of active filter circuits, especially with operational amplifiers as components, is the first of its kind in circuit evolution. Only one software package is used, instead of combining PSpice and Matlab for electronic circuit simulation.
This saves the time spent moving the simulation between the two platforms and reduces subscription costs. Each circuit evolved by GP in Matlab is automatically transformed into a symbolic netlist, also within Matlab. The netlist is fed into MSCAM, which uses it to generate the matrices for simulation; these matrices support frequency-response analysis of low-pass, high-pass, band-pass and band-stop active and passive filter circuits. After circuit evolution with the developed GP, PSO is applied to optimise some of the circuits. The algorithm is tested on twelve circuits (five active filters, four passive filters and three transistor amplifiers), and the results show that it is efficient in terms of design.
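The core optimisation loop the abstract describes can be sketched with a minimal PSO. The objective used below is a toy stand-in (distance from a known optimum); the thesis's real cost function combines input/output resistance, power consumption, gain and band edges of the cascode circuit, which are not reproduced here.

```python
import random

def pso_minimise(objective, dim, n_particles=30, iters=200,
                 bounds=(-10.0, 10.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser with inertia weight w and
    cognitive/social coefficients c1, c2 (standard textbook values)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: squared distance of "component values" from a target.
target = [2.0, -3.0, 1.0]
best, best_val = pso_minimise(
    lambda x: sum((a - b) ** 2 for a, b in zip(x, target)), dim=3)
```

In the thesis's setting, `objective` would instead evaluate the circuit matrices produced by MSCAM and score the resulting circuit parameters.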
APA, Harvard, Vancouver, ISO, and other styles
46

Rendo, Fernandez Jose Ignacio. "Semantic interoperability in ad-hoc computing environments." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/3072.

Full text
Abstract:
This thesis introduces a novel approach in which multiple heterogeneous devices collaborate to provide useful applications in an ad-hoc network. It proposes a smart home as a particular ubiquitous computing scenario, considering all the requirements the literature identifies for success in this kind of system. To that end, we envision a horizontally integrated smart home built up from independent components that provide services. These components are described with enough syntactic, semantic and pragmatic knowledge to accomplish spontaneous collaboration. The objective of this collaboration is domestic use, that is, the provision of valuable services capable of supporting home residents in their daily activities. Moreover, for the system to be attractive to potential customers, it should offer high levels of trust and reliability, and not at an excessive price. To achieve this goal, this thesis studies the synergies available when an ontological description of home device functionality is paired with a formal method. We propose an ad-hoc home network in which components are home devices modelled as processes and represented as semantic services by means of the Web Service Ontology (OWL-S). In addition, such services are specified, verified and implemented by means of Communicating Sequential Processes (CSP), a process algebra for describing concurrent systems. The ontology brings the level of knowledge the system needs to compose services in an ad-hoc environment. Services are composed by a goal-based system in order to satisfy user needs; this system understands both service representations and user context information.
Furthermore, the formal method contributes additional semantics for checking that such compositions will be correctly implemented and executed, achieving the reliability and the cost reductions (in the design, development and implementation of the system) needed for a smart home to succeed.
APA, Harvard, Vancouver, ISO, and other styles
47

Huhtinen, J. (Jouni). "Utilization of neural network and agent technology combination for distributed intelligent applications and services." Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514278550.

Full text
Abstract:
Abstract The use of agent systems has increased enormously, especially in the field of mobile services, and intelligent services have also increased rapidly on the web. This thesis introduces and describes the utilization of software agent technology in mobile services and in decentralized intelligent services in the multimedia business. Both the Genie Agent Architecture (GAA) and the Decentralized International and Intelligent Software Architecture (DIISA) are described. Common problems in decentralized software systems are a lack of intelligence, the communication of software modules and system learning. Another problem is the personalization of users and services, and a third is the matching of user and service characteristics at the web-application level in a non-linear way: web services should follow human steps and be capable of learning from human inputs and their characteristics in an intelligent way. This third problem is addressed in this thesis, and solutions are presented in the form of two intelligent software architectures and services. The solutions are based on a combination of neural network and agent technology; more specifically, on an intelligent agent that uses certain black-box information, namely a Self-Organizing Map (SOM). The process is as follows: information agents collect information from different sources such as the web, databases, users, other software agents and the environment. The information is filtered and adapted into input vectors, and maps are created from the data entries of an SOM. Using the maps is very simple: input forms are completed by users (automatically or manually) or by user agents, input vectors are formed again and sent to a given map, and the map gives several outputs which are passed through specific algorithms to an intelligent agent. The need for web intelligence and knowledge representation that serves users is a current issue in many business solutions.
The main goal is to enable this by means of autonomous agents which communicate with each other using an agent communication language, and with users in their native languages via several communication channels.
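The SOM-as-black-box pipeline sketched in this abstract (collect, vectorise, train, route queries to the best-matching map node) can be illustrated with a minimal Self-Organizing Map. The grid size, training data and schedule below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

class TinySOM:
    """Minimal Self-Organizing Map: a grid of nodes whose weight vectors
    are pulled toward the inputs, with a shrinking neighbourhood."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows * cols, dim))
        self.grid = np.array([(r, c) for r in range(rows)
                              for c in range(cols)], dtype=float)

    def bmu(self, x):
        # Best-matching unit: node whose weight vector is nearest the input.
        return int(np.argmin(((self.w - x) ** 2).sum(axis=1)))

    def train(self, data, iters=500, lr0=0.5, sigma0=1.5):
        for t in range(iters):
            x = data[t % len(data)]
            b = self.bmu(x)
            lr = lr0 * (1 - t / iters)                  # decaying learning rate
            sigma = max(sigma0 * (1 - t / iters), 0.3)  # shrinking neighbourhood
            d2 = ((self.grid - self.grid[b]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))          # neighbourhood weights
            self.w += lr * h[:, None] * (x - self.w)

# An "information agent" would filter raw observations into input vectors,
# train the map, then route new inputs to the matching map node.
data = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
som = TinySOM(2, 2, 2)
som.train(data)
node = som.bmu(np.array([0.95, 0.9]))  # node specialised for inputs near (1, 1)
```

In the architectures described above, the outputs read off the map would then be handed to the intelligent agent rather than used directly.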
APA, Harvard, Vancouver, ISO, and other styles
48

Costa, Filipe de Oliveira 1987. "Atribuição de fonte em imagens provenientes de câmeras digitais." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275692.

Full text
Abstract:
Advisor: Anderson de Rezende Rocha
Master's thesis (dissertação de mestrado), Universidade Estadual de Campinas, Instituto de Computação
Abstract: Verifying the integrity and authenticity of digital images is of fundamental importance when they may be presented as evidence in a court of law. One way to perform this verification is to identify the digital camera that captured them; much as ballistics tests match a gun to its bullets, we can match an image under investigation to the digital camera that acquired it. In this work, we discuss approaches for identifying whether or not a given image was captured by a specific digital camera. We carried out the research from two vantage points: (1) verification, in which the objective is to verify whether a given camera did, in fact, capture a given image; and (2) recognition, in which the focus is to verify whether an image was captured by some camera (if any) from a limited pool of devices and, if so, to point out the specific device. We considered an open-set scenario, in which we cannot assume full access to all of the devices under investigation. We also addressed the device-linking problem, whose goal is to verify whether a pair of images was generated by the same camera; this can be useful for grouping sets of images by source when no information about possible source devices is available. The proposed approaches reported good results, proving capable of identifying the specific device used to capture an image, and not merely its brand.
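A standard technique in this area (the abstract does not name the specific method used) is to correlate an image's sensor-noise residual against a camera's reference pattern. The sketch below simulates this with synthetic data; the 3x3 mean-filter denoiser, image sizes and noise levels are all illustrative assumptions.

```python
import numpy as np

def residual(img):
    """Noise residual: image minus a blurred (denoised) copy. A 3x3 mean
    filter stands in for the stronger denoisers used in practice."""
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)) / 9.0
    return img - blur

def camera_reference(images):
    # Average the residuals of images known to come from one camera.
    return np.mean([residual(im) for im in images], axis=0)

def correlation(a, b):
    # Normalised cross-correlation between two residual patterns.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Verification: does the query image's residual correlate with the
# camera's reference pattern above a chosen threshold?
rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 1, (32, 32))           # simulated sensor pattern
scenes = [rng.normal(0, 0.2, (32, 32)) for _ in range(8)]
own = [s + fingerprint for s in scenes]            # images from this camera
ref = camera_reference(own)
query_same = rng.normal(0, 0.2, (32, 32)) + fingerprint
query_other = rng.normal(0, 0.2, (32, 32)) + rng.normal(0, 1, (32, 32))
same_score = correlation(residual(query_same), ref)
other_score = correlation(residual(query_other), ref)
```

The same correlation score, computed between the residuals of two images rather than against a reference, gives a simple take on the device-linking problem mentioned above.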
Master's degree
Computer Science
Master in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Longo, Eugenio. "AI e IoT: contesto e stato dell’arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
Artificial intelligence is a branch of computer science concerned with programming and designing systems able to endow machines with characteristics typically considered human. The Internet of Things is a network of physical objects able to connect and exchange data with other devices over the internet. Although the Internet of Things and artificial intelligence are two distinct concepts, they can be integrated to create new solutions with high potential. The combined use of the two technologies increases the value of both, as it enables the collection of data and the construction of predictive models. The goal of this thesis is to analyse the evolution of artificial intelligence and of technological innovation in everyday life. Specifically, artificial intelligence applied to the Internet of Things has taken hold in the management of large settings such as smart cities and smart mobility, and of small ones such as smart homes, putting a large amount of private data onto the network. Problems remain, however: the level of security required to use these technologies in more critical applications has not yet been reached. The greatest challenge for the world of work will be to understand and exploit the potential that this new paradigm in the use of artificial intelligence will suggest.
APA, Harvard, Vancouver, ISO, and other styles
50

Makasi, Tendai. "Cognitive computing systems and public value: The case of chatbots and public service delivery." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/230002/1/Tendai_Makasi_Thesis.pdf.

Full text
Abstract:
This thesis investigates how cognitive computing system initiatives in the public sector can contribute to creating public value. It focuses specifically on public service delivery through service channels supported by chatbots and proposes recommendations to ensure that the important public service value dimensions are supported. The thesis builds upon discussions of public value creation and draws on interpretations of how chatbots can facilitate public value creation during chatbot-mediated service interactions, from the perspectives of both the users and the designers of the chatbots.
APA, Harvard, Vancouver, ISO, and other styles