Dissertations / Theses on the topic 'Generative AI'




Consult the top 32 dissertations / theses for your research on the topic 'Generative AI.'




1

TOMA, ANDREA. "PHY-layer Security in Cognitive Radio Networks through Learning Deep Generative Models: an AI-based approach." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1003576.

Abstract:
Cognitive Radio (CR) has recently been conceived as an intelligent radio endowed with cognition, which can be developed by implementing Artificial Intelligence (AI) techniques. Specifically, data-driven Self-Awareness (SA) functionalities, such as detection of spectrum abnormalities, can be implemented effectively, as the proposed research shows. One important application is PHY-layer security, since it is essential to establish secure wireless communications against external jamming attacks. In this framework signals are non-stationary, and features of such a dynamic spectrum, with multiple high-sampling-rate signals, are extracted through the dual-resolution Stockwell Transform (ST), which is proposed and validated in this work as part of the spectrum sensing techniques. A review of the state of the art in learning dynamic models from observed features then covers the theoretical aspects of Machine Learning (ML). In particular, following recent advances in ML, learning deep generative models with several layers of non-linear processing is selected as the AI method for the proposed spectrum abnormality detection in CR, towards a brain-inspired, data-driven SA. In the proposed approach, the features extracted from the ST representation of the wideband spectrum are organized in a high-dimensional generalized state vector, and a generative model is then learned and employed to detect any deviation from normal situations in the analysed spectrum (abnormal signals or behaviours). Specifically, the conditional GAN (C-GAN), the auxiliary classifier GAN (AC-GAN), and a deep VAE are considered as deep generative models. A dataset of a dynamic spectrum with multiple OFDM signals was generated using the National Instruments mm-Wave Transceiver, which operates at a 28 GHz central carrier frequency with an 800 MHz frequency range. The deep generative model is trained on generalized state vectors representing the mmWave spectrum under a normality pattern, without any malicious activity. Testing is based on new, independent data samples corresponding to an abnormality pattern, in which the moving signal follows a behaviour not observed during training. An abnormality indicator is measured and used for binary classification (normality hypothesis versus abnormality hypothesis), and the performance of the generative models is evaluated and compared through ROC curves and accuracy metrics.
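For readers who want to experiment with this kind of evaluation setup, here is a minimal sketch of the abnormality-indicator idea: score test vectors with a generative model trained only on normal spectrum data, then measure detection quality with a ROC curve. The `model.reconstruct` API is an assumption standing in for the thesis' C-GAN/AC-GAN/VAE models, not their actual code.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def abnormality_scores(model, X):
    """Reconstruction error as an abnormality indicator."""
    X_hat = model.reconstruct(X)            # assumed API of the trained model
    return np.mean((X - X_hat) ** 2, axis=1)

def evaluate(model, X_test, y_test):
    """y_test: 0 = normality hypothesis, 1 = abnormality hypothesis."""
    scores = abnormality_scores(model, X_test)
    fpr, tpr, _ = roc_curve(y_test, scores)
    return fpr, tpr, auc(fpr, tpr)          # ROC points and area under curve
```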
2

Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.

Abstract:
Many different extensions of the VAE framework have been introduced in the past. However, the vast majority of them focused on pure sub-symbolic approaches that are not sufficient for solving generative tasks that require a form of reasoning. In this thesis, we propose the probabilistic logic VAE (PLVAE), a neuro-symbolic deep generative model that combines the representational power of VAEs with the reasoning ability of probabilistic logic programming. The strength of PLVAE resides in its probabilistic logic prior, which provides an interpretable structure to the latent space that can be easily changed in order to apply the model to different scenarios. We provide empirical results of our approach by training PLVAE on a base task and then using the same model to generalize to novel tasks that involve reasoning with the same set of symbols.
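PLVAE's probabilistic-logic prior builds on probabilistic logic programming and is beyond a short sketch, but the underlying mechanism, replacing the VAE's standard normal prior with a structured one, can be illustrated. The snippet below estimates a single-sample Monte Carlo ELBO against a Gaussian-mixture prior in PyTorch as a hedged stand-in; `encoder` and `decoder` are assumed modules, and binary data is assumed for the likelihood.

```python
import torch
from torch.distributions import (Bernoulli, Categorical, Independent,
                                 MixtureSameFamily, Normal)

# A structured stand-in prior: a mixture of K Gaussians over a D-dim latent
# space (a logic prior would instead put mass on logically valid regions).
K, D = 4, 16
prior = MixtureSameFamily(
    Categorical(logits=torch.zeros(K)),
    Independent(Normal(torch.randn(K, D), torch.ones(K, D)), 1),
)

def elbo(x, encoder, decoder):
    """encoder(x) -> (mu, log_var); decoder(z) -> Bernoulli logits (assumed)."""
    mu, log_var = encoder(x)
    q = Independent(Normal(mu, (0.5 * log_var).exp()), 1)
    z = q.rsample()                              # reparameterised sample
    recon = Bernoulli(logits=decoder(z)).log_prob(x).sum(-1)
    kl = q.log_prob(z) - prior.log_prob(z)       # Monte Carlo KL estimate
    return (recon - kl).mean()
```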
3

Mennborg, Alexander. "AI-Driven Image Manipulation : Image Outpainting Applied on Fashion Images." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85148.

Abstract:
The e-commerce industry frequently has to deal with displaying product images on a website where the images are provided by selling partners. The images in question can have drastically different aspect ratios and resolutions, which makes it harder to present them while maintaining a coherent user experience. Manipulating images by cropping can sometimes cut off parts of the foreground (i.e. the product or person within the image). Image outpainting is a technique that allows images to be extended past their boundaries and can be used to alter the aspect ratio of images. Together with object detection for locating the foreground, this makes it possible to manipulate images without sacrificing parts of the foreground. For image outpainting, a deep learning model was trained on product images and can extend images by at least 25%. The model achieves an FID score of 8.29, a PSNR score of 44.29 and a BRISQUE score of 39.95. To test this solution in practice, a simple image manipulation pipeline was created which uses image outpainting when needed, and it shows promising results. Images can be manipulated in under a second running on a ZOTAC GeForce RTX 3060 (12 GB) GPU and in a few seconds running on an Intel Core i7-8700K (16 GB) CPU. There is also a special case of images where the background has been digitally replaced with a solid color, and these can be outpainted even faster without deep learning.
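As a rough illustration of the pipeline logic described here (not the thesis' code), the following computes how far an image must be extended to reach a target aspect ratio without cropping; `outpaint` is a hypothetical stand-in for the trained model.

```python
def extension_for_aspect(width, height, target_ratio):
    """Return (pad_w, pad_h) in pixels needed to reach target_ratio = w/h."""
    if width / height < target_ratio:            # too narrow: extend sideways
        return round(target_ratio * height) - width, 0
    return 0, round(width / target_ratio) - height

pad_w, pad_h = extension_for_aspect(600, 800, 4 / 3)    # -> (467, 0)
# result = outpaint(image, pad_left=pad_w // 2, pad_right=pad_w - pad_w // 2)
```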
4

Alabdallah, Abdallah. "Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41035.

Abstract:
Deep Neural Networks have long been considered black-box systems, whose lack of interpretability is a concern when they are applied in safety-critical systems. In this work, a novel approach to interpreting the decisions of DNNs is proposed. The approach relies on exploiting generative models and the interpretability of their latent space. Three methods for ranking features are explored, two of which depend on sensitivity analysis, while the third depends on a Random Forest model. The Random Forest model was the most successful at ranking the features, given its accuracy and inherent interpretability.
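A minimal sketch of the third ranking method, on synthetic data: fit a Random Forest and rank features by impurity-based importance. Variable names are illustrative; in the thesis the features would be latent codes from the generative model and the labels the DNN decisions being interpreted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 10))        # stand-in for latent-space features
y = (Z[:, 3] > 0).astype(int)             # stand-in for the DNN's decisions

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z, y)
ranking = np.argsort(rf.feature_importances_)[::-1]   # most important first
print("top features:", ranking[:3])       # feature 3 should rank first here
```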
5

PANFILO, DANIELE. "Generating Privacy-Compliant, Utility-Preserving Synthetic Tabular and Relational Datasets Through Deep Learning." Doctoral thesis, Università degli Studi di Trieste, 2022. http://hdl.handle.net/11368/3030920.

Abstract:
Two trends have rapidly been redefining the artificial intelligence (AI) landscape over the past several decades. The first is the rapid technological development that makes increasingly sophisticated AI feasible. From a hardware point of view, this includes increased computational power and efficient data storage. From a conceptual and algorithmic viewpoint, fields such as machine learning have undergone a surge, and synergies between AI and other disciplines have resulted in considerable developments. The second trend is the growing societal awareness around AI. While institutions are becoming increasingly aware that they have to adopt AI technology to stay competitive, issues such as data privacy and explainability have become part of public discourse. Combined, these developments result in a conundrum: AI can improve all aspects of our lives, from healthcare to environmental policy to business opportunities, but invoking it requires the use of sensitive data. Unfortunately, traditional anonymization techniques do not provide a reliable solution to this conundrum. Not only are they insufficient for protecting personal data, they also reduce the analytic value of data through distortion. However, the emerging study of deep-learning generative models (DLGM) may form a more refined alternative to traditional anonymization. Originally conceived for image processing, these models capture the probability distributions underlying datasets. Such distributions can subsequently be sampled, giving new data points not present in the original dataset, while the overall distribution of synthetic datasets built from such samples remains equivalent to that of the original dataset. In our research activity, we study the use of DLGM as an enabling technology for wider AI adoption. To do so, we first study legislation around data privacy with an emphasis on the European Union, and also provide an outline of traditional data anonymization technology. We then provide an introduction to AI and deep learning. Two case studies are discussed to illustrate the field's merits, namely image segmentation and cancer diagnosis. We then introduce DLGM, with an emphasis on variational autoencoders. The application of such methods to tabular and relational data is novel and involves innovative preprocessing techniques. Finally, we assess the developed methodology in reproducible experiments, evaluating both the analytic utility and the degree of privacy protection through statistical metrics.
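A hedged sketch of the core mechanism described above: sample latent codes from the prior, decode them into synthetic rows, and compare simple statistics of the real and synthetic tables as a crude utility check. The `decoder` callable and the preprocessing around it are assumptions, not the thesis' implementation.

```python
import numpy as np

def synthesize(decoder, n_rows, latent_dim, rng=np.random.default_rng(0)):
    """Sample synthetic rows from a trained tabular VAE decoder (assumed)."""
    z = rng.standard_normal((n_rows, latent_dim))
    return decoder(z)                       # assumed: maps latents to rows

def utility_gap(real, synthetic):
    """Crude analytic-utility check: largest gap between column means and
    between correlation matrices of the real and synthetic data."""
    mean_gap = np.abs(real.mean(0) - synthetic.mean(0)).max()
    corr_gap = np.abs(np.corrcoef(real.T) - np.corrcoef(synthetic.T)).max()
    return mean_gap, corr_gap
```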
6

Hagström, Adrian, and Rustam Stanikzai. "Writer identification using semi-supervised GAN and LSR method on offline block characters." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43316.

Abstract:
Block characters are often used when filling out forms, for example when writing one's personal number. The question of whether or not there is recoverable biometric (identity-related) information within individual digits of handwritten personal numbers is therefore relevant. This thesis investigates the question by using both handcrafted features and features extracted via deep learning (DL) models, while successively limiting the amount of available training samples. Some recent works using DL have presented semi-supervised methods that use Generative Adversarial Network (GAN)-generated data together with a modified Label Smoothing Regularization (LSR) function. Using this training method might improve performance over a baseline fully supervised model when doing authentication. This work additionally proposes a novel modified LSR function named Bootstrap Label Smoothing Regularizer (BLSR), designed to mitigate some of the problems of previous methods, and compares it to the others. The DL feature extraction is done by training a ResNet50 model to recognize the writers of personal numbers and then extracting the feature vector from the second-to-last layer of the network. Results show a clear indication of recoverable identity-related information within the handwritten (personal number) digits in boxes. Our results indicate an authentication performance, expressed as Equal Error Rate (EER), of around 25% with handcrafted features. The same performance measured in EER was between 20% and 30% when using the features extracted from the DL model. The DL methods, while showing potential for greater performance than the handcrafted ones, seem to suffer from fluctuating (noisy) results, making conclusions on their use in practice hard to draw. Additionally, when using one or two training samples, the handcrafted features easily beat the DL methods. When using the LSR-variant semi-supervised methods there is no noticeable performance boost, and BLSR achieves the second-best results among the alternatives.
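For context, plain label smoothing regularization as used in the baselines can be written in a few lines of PyTorch; the thesis' BLSR variant modifies this scheme, and its exact form is not reproduced here.

```python
import torch
import torch.nn.functional as F

def lsr_loss(logits, target, eps=0.1):
    """Cross-entropy against smoothed targets: the one-hot label keeps
    probability 1 - eps and the rest is spread over the other classes."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smoothed = torch.full_like(log_probs, eps / (n_classes - 1))
    smoothed.scatter_(-1, target.unsqueeze(-1), 1.0 - eps)
    return -(smoothed * log_probs).sum(-1).mean()
```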
7

Santiago, Dionny. "A Model-Based AI-Driven Test Generation System." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3878.

Abstract:
Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and development of automated test scripts to support regression testing. This thesis is motivated by the opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions to software testing. We present an approach that combines a trainable web component classifier, a test case description language, and a trainable test generation and execution system that can learn to generate new test cases. Training data was collected and hand-labeled across 7 systems, 95 web pages, and 17,360 elements. A total of 250 test flows were also manually hand-crafted for training purposes. Various machine learning algorithms were evaluated. Results showed that Random Forest classifiers performed well on several web component classification problems. In addition, Long Short-Term Memory neural networks were able to model and generate new valid test flows.
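A hedged sketch of the generation step: sampling a new test flow token-by-token from a trained sequence model. The `model` signature and the vocabulary are assumptions standing in for the thesis' LSTM and test case description language.

```python
import torch

@torch.no_grad()
def sample_flow(model, start_token, end_token, max_len=50, temperature=1.0):
    """Assumes model(input_ids, hidden) -> (logits [B, T, V], hidden)."""
    tokens, hidden = [start_token], None
    for _ in range(max_len):
        logits, hidden = model(torch.tensor([[tokens[-1]]]), hidden)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if nxt == end_token:
            break
        tokens.append(nxt)
    return tokens[1:]          # the sampled test-step tokens
```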
8

Olsen, Linnéa. "Can Chatbot technologies answer work email needs? : A case study on work email needs in an accounting firm." Thesis, Karlstads universitet, Handelshögskolan (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85013.

Abstract:
Work email is one of an organisation's most critical tools today. It has become a standard way to communicate internally and externally, and it can also affect our well-being: email overload has become a well-known issue for many people. Through interviews, follow-up interviews, and a workshop, three people from an accounting firm prioritised pre-defined email needs and identified several other email needs that were added to the priority list. A thematic analysis and a summary of a Likert scale were conducted to identify underlying work email needs as well as work email needs that are not apparent. Three work email needs were selected, and scenario-based methods and the elements of PACT were used to investigate how the characteristics of a chatbot can help solve the identified work email overload issue. The results show that email overload is perceived differently from individual to individual; how email is handled, and which email activities are performed, indicate how the feeling of email overload is experienced. The results show a need to quickly get a sense of the email content, to rapidly collect financial information and information from Swedish authorities, and to handle repetitive, time-consuming tasks. Suggestions on how this problem can be solved have been put forward for many years, including how to use machine learning to help reduce email overload, yet many of these proposed solutions have not been implemented at full scale. One conclusion may be that since email overload is not experienced in the same way, individuals have different needs: one solution does not fit all. With the help of the character of a chatbot, many problems can be solved. A chatbot that can learn individuals' email patterns could suggest email tasks to the user and perform tasks to reduce the perception of email overload, use keywords for email intents to give a sense of the email content faster, produce quick links to information about the identified subject, and, working preventively, give the user reminders and perform repetitive tasks on specific dates.
9

Goldstein, ép Lejuste Déborah. "La transformation numérique des TPE/PME traditionnelles comme catalyseur du développement économique territorial : enjeux et impacts socio- économiques." Electronic Thesis or Diss., Limoges, 2024. http://www.theses.fr/2024LIMO0023.

Abstract:
The revolution induced by the digital transformation of very small enterprises (VSEs) and small and medium-sized enterprises (SMEs) is unprecedented, driven by the rapid emergence of digital technologies. This change goes far beyond mere tool modernization; it brings about a profound shift in how these businesses interact with their economic and social environment. This thesis argues that this transformation constitutes a strategic process that comprehensively and innovatively integrates digital technologies into all aspects of an organization. It delves into how the digital transformation of traditional VSEs/SMEs can act as a catalyst for territorial economic development, analyzing its socio-economic issues and impacts. By combining qualitative and quantitative approaches, it addresses the issue from various angles, including organizational, managerial, and territorial dimensions. Structured around four distinct axes of analysis, this thesis by articles examines the strategic aspect of digital transformation and its role in territorial resilience, the management of externalities generated by this transformation, the evolution of the role of leadership, and the impact of using generative AI in data management and decision-making. Beyond the individual findings of the articles, several cross-cutting conclusions emerge from the research, highlighting the growing importance of digitalization for traditional VSEs/SMEs while underscoring the need for a balanced approach between digital tools and human interactions. By integrating a digital evolution of the theory of strategic construction by the business leader developed by Henry Mintzberg, this thesis puts forward recommendations for leaders and institutions. These recommendations aim to promote digital culture, facilitate collaboration, and provide personalized support for the implementation of digital transformation within businesses. Importantly, this approach, by highlighting a different perception of digitalization and digital transformation within the company, fosters the development of ecosystems in perspective with territorial attractiveness. Finally, this thesis aims to assist economic and institutional actors in successfully navigating the digital era by integrating principles of digital sobriety and environmental responsibility into their strategies, while fostering innovation and competitiveness.
10

Ko, Kai-Chung. "Protocol test sequence generation and analysis using AI techniques." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29192.

Abstract:
This thesis addresses two major issues in protocol conformance testing: test sequence generation and test result analysis. For test sequence generation, a new approach is presented based on constraint satisfaction problem (CSP) techniques, which are widely used in the AI community. This method constructs a unique test sequence for a given FSM by using an initial test sequence, such as a transition tour or a UIO test sequence, and incrementally generating a set of test subsequences which together represent the constraints imposed on the overall structure of the FSM. The new method not only generates test sequences with fault coverage at least as good as that provided by the existing methods, but also allows the implementation under test (IUT) to have a larger number of states than the specification. In addition, the new method lends itself naturally to both test result analysis and fault coverage measurement. For test result analysis, the CSP method uses the observed sequence as the initial sequence, constructs all fault models which satisfy the initial sequence, and introduces additional subsequences to pinpoint the IUT fault model. A second method for test result analysis is also proposed, which originates from a model of diagnostic reasoning from first principles, another well-known AI technique, and produces all minimal diagnoses by considering the overall consistency of the system together with the observation. Unlike the first method, the second method does not require the explicit computation of all fault models, and hence is considered more suitable for large systems. To our knowledge, the methods proposed in this thesis represent the first attempt to apply AI techniques to the problem of protocol test sequence generation and analysis.
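A small, self-contained illustration of the setting (not the CSP method itself): run a test sequence on the specification FSM and on a candidate fault model of the IUT, and report the first input where their outputs diverge.

```python
# Mealy-style machines as dicts: (state, input) -> (next_state, output).

def run(fsm, start, inputs):
    state, outputs = start, []
    for i in inputs:
        state, out = fsm[(state, i)]
        outputs.append(out)
    return outputs

def first_divergence(spec, iut, start, inputs):
    a, b = run(spec, start, inputs), run(iut, start, inputs)
    return next((k for k, (x, y) in enumerate(zip(a, b)) if x != y), None)

spec = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s0", 1)}
iut  = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s0", 0)}    # output fault
print(first_divergence(spec, iut, "s0", ["a", "a", "a"]))  # -> 1
```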
11

Strineholm, Philippe. "Exploring Human-Robot Interaction Through Explainable AI Poetry Generation." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54606.

Abstract:
As the field of Artificial Intelligence continues to evolve into a tool of societal impact, a need arises to break its initial boundaries as a computer science discipline and also include different humanistic fields. The work presented in this thesis revolves around the role that explainable artificial intelligence has in human-robot interaction, through the study of poetry generators. To better understand the scope of the project, a study of poetry generators presents the steps involved in the development process and the evaluation methods. In the algorithmic development of poetry generators, the shift from traditional disciplines to transdisciplinarity is identified. In collaboration with researchers from the Research Institutes of Sweden, state-of-the-art generators are tested to showcase the power of artificially enhanced artifacts. A development plateau is discovered, and with the inclusion of Design Thinking methods potential future human-robot interaction development is identified. A physical prototype capable of verbal interaction on top of a poetry generator is created, with the new feature of changing the corpora to any given audio input. Lastly, the strengths of transdisciplinarity are connected with the open-source community with regard to creativity and self-expression, producing an online tool to address future improvements and introduce non-experts to the steps required to self-build an intelligent robotic companion, thus also encouraging public technological literacy. Explainable AI is shown to help with user involvement in the process of creation, alteration and deployment of AI-enhanced applications.
12

Yannakakis, Georgios N. "AI in computer games : generating interesting interactive opponents by the use of evolutionary computation." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/879.

Abstract:
Which features of a computer game contribute to the player’s enjoyment of it? How can we automatically generate interesting and satisfying playing experiences for a given game? These are the two key questions addressed in this dissertation. Player satisfaction in computer games depends on a variety of factors; here the focus is on the contribution of the behaviour and strategy of game opponents in predator/prey games. A quantitative metric of the ‘interestingness’ of opponent behaviours is defined based on qualitative considerations of what is enjoyable in such games, and a mathematical formulation grounded in observable data is derived. Using this metric, neural-network opponent controllers are evolved for dynamic game environments where limited inter-agent communication is used to drive spatial coordination of opponent teams. Given the complexity of the predator task, cooperative team behaviours are investigated. Initial candidates are generated using off-line learning procedures operating on minimal neural controllers with the aim of maximising opponent performance. These example controllers are then adapted using on-line (i.e. during play) learning techniques to yield opponents that provide games of high interest. The on-line learning methodology is evaluated using two dissimilar predator/prey games with a number of different computer player strategies. It exhibits generality across the two game test-beds and robustness to changes of player, initial opponent controller selected, and complexity of the game field. The interest metric is also evaluated by comparison with human judgement of game satisfaction in an experimental survey. A statistically significant number of players were asked to rank game experiences with a test-bed game using perceived interestingness and their ranking was compared with that of the proposed interest metric. The results show that the interest metric is consistent with human judgement of game satisfaction. Finally, the generality, limitations and potential of the proposed methodology and techniques are discussed, and other factors affecting the player’s satisfaction, such as the player’s own strategy, are briefly considered. Future directions building on the work described herein are presented and discussed.
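The thesis derives its interest metric from qualitative criteria and observable data; that formulation is not reproduced here. As a hedged illustration of the general shape, a weighted combination of normalised behaviour statistics, consider:

```python
def interest(challenge, behaviour_diversity, spatial_diversity,
             weights=(1.0, 1.0, 1.0)):
    """Illustrative stand-in, not the thesis' metric. All inputs are
    assumed normalised to [0, 1]; higher means a more interesting game."""
    w = sum(weights)
    return (weights[0] * challenge
            + weights[1] * behaviour_diversity
            + weights[2] * spatial_diversity) / w
```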
13

Thörn, Oscar. "AI Drummer - Using Learning to Enhance Artificial Drummer Creativity." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-90637.

Abstract:
This project explores the usability of Transformers for learning a model that can play the drums and accompany a human pianist. Building upon previous work using fuzzy logic systems, three experiments are devised to test the usability of Transformers. The report also includes a brief survey of algorithmic music generation. The result of the project is that, in their current form, Transformers cannot easily learn collaborative music generation. The key insight is that a new way to encode sequences is needed for collaboration between human and robot in the music domain. This encoding should be able to handle the varied demands and lengths of different musical instruments.
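A hedged sketch of the encoding problem the report identifies: merging events from two instruments with different rhythmic granularity into one token sequence. The interleaving scheme below is an illustration, not the report's proposal.

```python
def interleave(piano_events, drum_events):
    """Each event: (time_in_ticks, token). Returns one merged sequence
    with an instrument tag, ordered by time."""
    tagged = [(t, "PIANO", tok) for t, tok in piano_events] + \
             [(t, "DRUM", tok) for t, tok in drum_events]
    return [f"{inst}_{tok}" for t, inst, tok in sorted(tagged)]

print(interleave([(0, "C4"), (480, "E4")], [(0, "KICK"), (240, "HAT")]))
# -> ['DRUM_KICK', 'PIANO_C4', 'DRUM_HAT', 'PIANO_E4']
```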
14

Sahlin, Jesper, and Victor Olsson. "A Smart Terrain based model for generating behavioural patterns." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20441.

Abstract:
In this thesis we present a model for the generation of behaviour patterns for characters in digital games. In the genre of role-playing games the player is placed in a world filled with fantastic monsters and brave heroes. In this kind of game the good characters are as important as the evil creatures the player must fight. What kind of life does a game character have when not helping the player on adventures? Maybe they live as fishermen on the sea or as farmers in the fields. More likely, they live in villages amongst other exciting game characters. We examine what these characters' daily routines look like and consider a technique used for creating their behaviour patterns, Cyclic Scheduling. The technique is used by developers to create schedules that control the behaviour of characters in games. These schedules have to be created during the game development process, and for bigger games this consumes a lot of time. The model we present in this thesis uses the Smart Terrain technique to automatically generate behaviour patterns, thereby reducing development time. We discuss how the model can be used in dynamic game worlds where the developers are unaware of potential changes in the game world.
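A minimal sketch of the Smart Terrain idea the model builds on: objects advertise which needs they satisfy, and a character picks the advertisement that best matches its needs, discounted by distance. All names and numbers are illustrative.

```python
def pick_object(character_needs, position, objects):
    def score(obj):
        gain = sum(character_needs.get(need, 0.0) * value
                   for need, value in obj["advertises"].items())
        dist = abs(obj["pos"] - position) + 1.0
        return gain / dist                  # nearby, relevant objects win

    return max(objects, key=score)

objects = [
    {"name": "well",  "pos": 3, "advertises": {"thirst": 0.9}},
    {"name": "field", "pos": 8, "advertises": {"work": 0.7, "hunger": 0.2}},
]
chosen = pick_object({"thirst": 0.8, "work": 0.3}, position=0, objects=objects)
print(chosen["name"])                       # -> well
```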
15

GUPTA, NIDHI. "AUTOMATIC GENERATION CONTROL OF INTERCONNECTED MULTI AREA POWER SYSTEM." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18414.

Abstract:
Currently, power system operation and control with AGC are undergoing fundamental changes due to the rapidly increasing amount of renewable sources, energy storage systems, restructuring, and the emergence of new types of power generation, consumption and power electronics technologies. Continuous growth in size and complexity, stochastically changing power demands, system modelling errors, alterations in electric power system structures and variations in system parameters over time have turned the AGC task into a challenging one. The infrastructure of an intelligent power system should effectively support the provision of auxiliary services, such as an AGC system drawing on various sources, through intelligent schemes. The literature survey shows that the performance of AGC in interconnected power systems with diverse sources is improved by changes in controller structure, the use of intelligent optimization techniques for controller parameters, the addition of storage systems, and the consideration of different participation of diverse sources in multi-area power systems. Hence, proposing and implementing new controller approaches that apply high-performance heuristic optimization algorithms to real-world problems is always welcome. The performance of many controllers depends on the proper selection of certain algorithms and specific control parameters. The goal of the present study is therefore to propose different types of new supplementary controllers to achieve better dynamic performance in multi-area power systems with diverse sources, namely a two-area power system with and without non-linearity and a three-area power system with optimal control and an energy storage system. Based on the extensive literature review of control designs for AGC of interconnected power systems, it has been felt that new control techniques are needed for the design of AGC regulators for interconnected power systems that include renewable sources. The main objective of the proposed research work is to design new AGC regulators that are simple, robust and easy to implement compared with the available control techniques. The problem of nonlinearity in interconnected power systems with diverse sources has also been addressed with suitable control algorithms. The presented work is divided into nine chapters. Chapter 1 deals with the introduction to AGC of power systems and gives a widespread review of the taxonomy of optimization algorithms. Chapter 2 presents a critical review of AGC schemes in interconnected multi-area power systems with diverse sources. Chapter 3 stresses the modelling of the diverse-source power systems under consideration. The main simulation work starts from Chapter 4, where a novel Jaya-based AGC of a two-area interconnected thermal-hydro-gas power system with varying participation of sources is proposed. In Chapter 5, the novel Jaya-based AI technique is further employed on a realistic power system by considering nonlinearities such as Governor Dead Band (GDB), Generation Rate Constraint (GRC) and boiler dynamics; the study covers Jaya-based AGC of two-area interconnected thermal-hydro-wind and thermal-hydro-diesel power systems with and without nonlinearities, considering step load and random perturbations at different control areas. In Chapter 6, optimal AGC regulators are designed for three different three-area interconnected multi-source power systems; in each power system, the optimal regulators are designed using different structures of the cost weighting matrices (Q and R).
In Chapter 7, the implementation of a Superconducting Magnetic Energy Storage (SMES) system in the operation and control of AGC of three-area multi-source power systems is studied. An analysis of a PSO-tuned integral controller for AGC of three-area interconnected multi-source power systems with and without SMES, considering step load perturbations at different control areas, has been done, and the comparative performance of different bio-inspired artificial techniques is presented for AGC of a three-area interconnected power system with SMES. Chapter 8 presents AGC of three-area multi-source interconnected power systems including and excluding a Battery Energy Storage System (BESS) under step load perturbations in different control areas. In Chapter 9, the performance of the different control techniques presented for AGC of multi-area interconnected multi-source power systems is summarized and the scope for further work in this area is highlighted.
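As a much-simplified, hedged illustration of the control problem (parameter values are illustrative and governor/turbine dynamics are omitted), a single-area frequency response with an integral AGC controller acting on the area control error can be simulated in a few lines:

```python
M, D, B, Ki, dt = 10.0, 1.0, 20.0, 0.3, 0.01   # inertia, damping, bias, gain, step
df, integral = 0.0, 0.0                        # frequency deviation, ACE integral
d_pl = 0.1                                     # step load disturbance (p.u.)

for _ in range(int(60 / dt)):                  # simulate 60 seconds
    ace = B * df                               # area control error (single area)
    integral += ace * dt
    u = -Ki * integral                         # integral AGC action
    df += dt * (u - d_pl - D * df) / M         # swing-equation-style dynamics

print(f"frequency deviation after 60 s: {df:.4f} p.u.")  # driven back to ~0
```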
16

POCHET, AXELLE DANY JULIETTE. "MODELING OF GEOBODIES: AI FOR SEISMIC FAULT DETECTION AND ALL-QUADRILATERAL MESH GENERATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35861@1.

Abstract:
Safe oil exploration requires good numerical modeling of the subsurface geobodies, which includes, among other steps, seismic interpretation and mesh generation. This thesis presents a study in these two areas. The first study is a contribution to data interpretation, examining the possibilities of automatic seismic fault detection using deep learning methods. In particular, we use Convolutional Neural Networks (CNNs) on seismic amplitude maps, with the particularity of training on synthetic data with the goal of classifying real data. In the second study, we propose a new two-dimensional all-quadrilateral meshing algorithm for geomechanical domains, based on an innovative quadtree approach: we define new subdivision patterns to efficiently adapt the mesh to any input geometry. The resulting mesh is suited for Finite Element Method (FEM) simulations.
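A hedged sketch of the classification setting in the first study: a small CNN labelling patches of a seismic amplitude map as fault or no-fault. The architecture and sizes are illustrative, not those of the thesis.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),        # fault vs. no-fault, for 64x64 patches
)
patches = torch.randn(8, 1, 64, 64)    # batch of amplitude-map patches
logits = model(patches)                # train with nn.CrossEntropyLoss on
                                       # synthetic labels, then test on real data
```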
17

Jesus, Diana Vaz de. "Apropriação e inserção na contra-arte da geração AI-5 /." São Paulo : [s.n.], 2010. http://hdl.handle.net/11449/86892.

Advisor: José Leonardo do Nascimento
Committee: Sérgio Romagnolo
Committee: Francisco Cabral Alambert Junior
Abstract:
The purpose of this work is to research the artwork of Brazilian artists who, in the 1960s and 1970s, produced their works using appropriation as an artistic procedure, as well as the (re)insertion of their works into everyday life. Among those who stood out in this practice in the researched period, the following artists were chosen for analysis: Nelson Leirner, Cildo Meireles and Antonio Manuel. By better defining the concept of appropriation and retracing the history of this procedure in art, it was possible to raise the questions this kind of practice provokes and the paradigm changes it brought about. From Duchamp's readymades to the détournements of the Situationists, the practice of appropriation problematized questions such as the artist's authorship and the very nature of art. As the supports of such works are everyday objects, the question of what would define an appropriated object as a work of art was also analyzed, taking as a theoretical basis Arthur C. Danto's definition of art (The Transfiguration of the Commonplace). Turning to Brazilian art, the first manifestations of this practice were researched to verify their adaptation to the ideals of national art. Taking into consideration the social, political and economic context of the 1960s and 1970s, particularly the AI-5 period, it was possible to analyze whether that context influenced the creation of the researched artists' works. Having as reference the term 'contra-arte' (counter-art), coined by the critic Frederico Morais to refer to the art of the AI-5 generation, it was concluded that those artists added political contestation to the contestation of art itself, the latter being their true legacy.
Master's
18

Jesus, Diana Vaz de [UNESP]. "Apropriação e inserção na contra-arte da geração AI-5." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/86892.

Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The purpose of this work is to research the artwork of Brazilian artists who, in the 1960s and 1970s, produced their works using appropriation as an artistic procedure, as well as the (re)insertion of their works into everyday life. Among those who stood out in this practice in the researched period, the following artists were chosen for analysis: Nelson Leirner, Cildo Meireles and Antonio Manuel. By better defining the concept of appropriation and retracing the history of this procedure in art, it was possible to raise the questions this kind of practice provokes and the paradigm changes it brought about. From Duchamp's readymades to the détournements of the Situationists, the practice of appropriation problematized questions such as the artist's authorship and the very nature of art. As the supports of such works are everyday objects, the question of what would define an appropriated object as a work of art was also analyzed, taking as a theoretical basis Arthur C. Danto's definition of art (The Transfiguration of the Commonplace). Turning to Brazilian art, the first manifestations of this practice were researched to verify their adaptation to the ideals of national art. Taking into consideration the social, political and economic context of the 1960s and 1970s, particularly the AI-5 period, it was possible to analyze whether that context influenced the creation of the researched artists' works. Having as reference the term 'contra-arte' (counter-art), coined by the critic Frederico Morais to refer to the art of the AI-5 generation, it was concluded that those artists added political contestation to the contestation of art itself, the latter being their true legacy.
19

Almkvist, Jimmy. "Empirecraft." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-41372.

Abstract:
In my thesis I have produced the start of a multiplayer voxel strategy sandbox game with advanced AI. The world is made out of voxels in the form of blocks that both the players and other units can affect and change, and every block follows physical laws for both fluids and solid bodies. The game is designed for several players who fight for control over land and resources with the help of their AI-controlled villagers.
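A toy sketch of block-level fluid rules of the kind described: water in a 2D grid falls when the cell below is empty, otherwise spreads sideways; a real voxel engine would apply such rules in 3D on each tick.

```python
AIR, WATER, STONE = 0, 1, 2

def step(grid):
    """One fluid tick over a 2D grid (list of rows), processed bottom-up."""
    new = [row[:] for row in grid]
    for y in range(len(grid) - 2, -1, -1):          # skip the floor row
        for x in range(len(grid[0])):
            if grid[y][x] != WATER:
                continue
            if new[y + 1][x] == AIR:                # fall straight down
                new[y + 1][x], new[y][x] = WATER, AIR
            else:                                   # spread to an empty side
                for nx in (x - 1, x + 1):
                    if 0 <= nx < len(grid[0]) and new[y][nx] == AIR:
                        new[y][nx], new[y][x] = WATER, AIR
                        break
    return new
```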
20

Jonsson, Hanna, and Luyolo Mazomba. "Revenue Generation in Data-driven Healthcare : An exploratory study of how big data solutions can be integrated into the Swedish healthcare system." Thesis, Umeå universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-161384.

Abstract:
The purpose of this study is to investigate how big data solutions in the Swedish healthcare system can generate revenue. As technology continues to evolve, the use of big data is beginning to transform processes in many different industries, making them more efficient and effective. The opportunities presented by big data have been researched to a large extent in commercial fields; however, research on the use of big data in healthcare is scarce, and this is particularly true in the case of Sweden. Furthermore, there is a lack of research that explores the interface between big data, healthcare and revenue models. The interface between these three fields of research is important, as innovation and the integration of big data in healthcare could be affected by the ability of companies to generate revenue from developing such innovations or solutions. Thus, this thesis aims to fill this gap in research and contribute to the limited body of knowledge that exists on this topic. The study conducted in this thesis used qualitative methods: a literature search was done and interviews were conducted with individuals who hold managerial positions at Region Västerbotten. The purpose of conducting these interviews was to establish a better understanding of the Swedish healthcare system, of how its structure has influenced the use, or lack thereof, of big data in the healthcare delivery process, and of how this structure enables the generation of revenue through big data solutions. The data collected were analysed using the grounded theory approach, which includes the coding and thematising of the empirical data in order to identify the key areas of discussion. The findings revealed that the current state of the Swedish healthcare system does not present an environment in which big data solutions developed for the system can thrive and generate revenue. However, if action is taken to change the current state of the system, then revenue generation may be possible in the future. The findings also identified key barriers that need to be overcome in order to increase the integration of big data into the healthcare system: (i) the lack of big data knowledge and expertise, (ii) data protection regulations, (iii) national budget allocation and (iv) the lack of structured data. Through collaborative work between actors in both the public and private sectors, these barriers can be overcome, and Sweden could be on its way to transforming its healthcare system with the use of big data solutions, thus improving the quality of care provided to its citizens. Keywords: big data, healthcare, Swedish healthcare system, AI, revenue models, data-driven revenue models
21

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make dialogue become tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to input semantics. The model is motivated by the Long-Short Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; it also integrates sentence planning and surface realisation into a single optimisation problem. The single optimisation not only bypasses the costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe. Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
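The gating mechanism applied to input semantics can be illustrated with a simplified sketch (not the thesis' exact formulation): a learned gate consumes part of the dialogue-act vector at each generation step so that semantics already expressed are not repeated.

```python
import torch
import torch.nn as nn

class GatedDAReader(nn.Module):
    """Illustrative reading gate over a dialogue-act (DA) feature vector."""

    def __init__(self, emb_dim, hid_dim, da_dim):
        super().__init__()
        self.gate = nn.Linear(emb_dim + hid_dim, da_dim)

    def forward(self, word_emb, hidden, da):
        # r in (0, 1): how much of each DA feature survives this step
        r = torch.sigmoid(self.gate(torch.cat([word_emb, hidden], dim=-1)))
        return r * da          # updated DA vector, fed to the next RNN step
```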
22

BRANCA, GIOVANNI. "Architectures and technologies for quality of service provisioning in next generation networks." Doctoral thesis, Università degli Studi di Cagliari, 2012. http://hdl.handle.net/11584/266146.

Full text
Abstract:
An NGN is a telecommunication network that differs from classical dedicated networks in its capability to provide voice, video, data and cellular services over the same infrastructure (Quadruple-Play). The ITU-T standardization body has defined the NGN architecture in three distinct and well-defined strata: the transport stratum, which takes care of maintaining end-to-end connectivity; the service stratum, which is responsible for enabling the creation and delivery of services; and finally the application stratum, where applications can be created and executed. The most important separation in this architecture is between the transport and service strata. The aim is threefold: to enable the flexibility to add, maintain and remove services without any impact on the transport layer; to enable the flexibility to add, maintain and remove transport technologies without any impact on access to services, applications, content and information; and finally to enable the efficient coexistence of multiple terminals, access technologies and core transport technologies. The Service Oriented Architecture (SOA) is a paradigm often used in systems deployment and integration for organizing and utilizing distributed capabilities under the control of different ownership domains. In this thesis, SOA technologies in network architectures are surveyed following the NGN functional architecture as defined by the ITU-T. Within each stratum, the main logical functions that have been investigated according to a service-oriented approach are highlighted. Moreover, a new definition of the NGN transport stratum functionalities according to the SOA paradigm is proposed; an implementation of the relevant service interfaces is used to analyze this approach, and experimental results give some insight into the potential of the proposed strategy. Within the NGN research area, especially in IP-based network architectures, Traffic Engineering (TE) refers to a set of policies and algorithms aimed at balancing network traffic load so as to improve network resource utilization and guarantee service-specific end-to-end QoS. DS-TE technology extends TE functionalities to a per-class basis by introducing a higher level of traffic classification which associates with each class type (CT) a constraint on bandwidth utilization. These constraints are set by defining and configuring a bandwidth constraint (BC) model which drives resource utilization, aiming at better load balancing, higher QoS performance and a lower call-blocking rate. Default TE implementations rely on a centralized approach to bandwidth and routing management, requiring external management entities which periodically collect network status information and issue management actions. However, due to increasing network complexity, it is desirable that nodes automatically discover their environment, self-configure and update themselves to adapt to changes. In this thesis, the bandwidth management problem is tackled with an autonomic and distributed approach. Each node has a self-management module, which monitors the unreserved bandwidth in adjacent nodes and adjusts the local bandwidth constraints so as to reduce the differences in the unreserved bandwidth of neighbor nodes. With this distributed and autonomic algorithm, BCs are dynamically modified to drive routing decisions toward traffic balancing while respecting the QoS constraints of each class-type traffic request.
Finally, Video on Demand (VoD) is a service that provides a video whenever the customer requests it. Realizing a VoD system over the Internet requires architectures tailored to video features such as guaranteed bandwidth and constrained transmission delays: these are hard to provide in the traditional Internet architecture, which is not designed to deliver adequate quality of service (QoS) and quality of experience (QoE) to the final user. Typical VoD solutions can be grouped into four categories: centralized, proxy-based, Content Delivery Network (CDN) and hybrid architectures. Hybrid architectures combine the employment of a centralized server with that of a Peer-to-Peer (P2P) network. This approach can effectively reduce the server load and avoid network congestion close to the server site, because the peers support the delivery of the video to other peers using a cache-and-relay strategy that makes use of their upload bandwidth. However, in a peer-to-peer network each peer is free to join and leave the network without notice, leading to the phenomenon of peer churn. These dynamics are dangerous for VoD architectures, affecting the integrity and retainability of the service. In this thesis, a study aimed at evaluating the impact of peer churn on system performance is proposed. Starting from important relationships between system parameters such as playback buffer length, peer request rate, peer average lifetime and server upload rate, four different analytic models are proposed.
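As a rough, hedged illustration of the autonomic idea described above, the sketch below implements a diffusion-style update in which each node nudges its view of available bandwidth toward the average unreserved bandwidth of its neighbours; repeated locally at every node, differences between neighbours shrink. The function and parameter names are assumptions for illustration, not the thesis algorithm.

```python
# Illustrative diffusion-style balancing step; not the thesis algorithm.
import numpy as np

def balance_step(unreserved, adjacency, alpha=0.2):
    """unreserved[i]: unreserved bandwidth seen at node i; adjacency[i][j] = 1
    if nodes i and j are neighbours. Returns the updated vector."""
    u = np.asarray(unreserved, dtype=float)
    a = np.asarray(adjacency, dtype=float)
    neigh_avg = (a @ u) / np.maximum(a.sum(axis=1), 1)  # mean over neighbours
    return u + alpha * (neigh_avg - u)                  # move toward it

u = [10.0, 40.0, 25.0]
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
for _ in range(5):
    u = balance_step(u, adj)
print(u)  # neighbour differences shrink over iterations
```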
APA, Harvard, Vancouver, ISO, and other styles
23

Nogueira, Yuri Lenon Barbosa. "Integração Mente e Ambiente para a Geração de Comportamentos Emergentes em Personagens Virtuais Autônomos Através da Evolução de Redes Neurais Artificiais." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=12686.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O senso de imersão do usuário em um ambiente virtual requer não somente alta qualidade visual gráfica, mas também comportamentos adequados por parte dos personagens virtuais, isto é, com movimentos e ações que correspondam às suas características físicas e aos eventos que ocorrem em seu meio. Nesse contexto, percebe-se o papel fundamental desempenhado pelo modo como os agentes se comportam em aplicações de RV. O problema que permanece em aberto é: "Como obter comportamentos autônomos naturais e realistas de personagens virtuais?". Um agente é dito autônomo se ele for capaz de gerar suas próprias normas (do grego autos, "a si mesmo", e nomos, "norma", "ordem"). Logo, autonomia implica em ações realizadas por um agente que resultam da estreita interação entre suas dinâmicas internas e os eventos ocorrendo no ambiente ao seu redor, ao invés de haver um controle externo ou uma especificação de respostas em um plano pré-definido. Desse modo, um comportamento autônomo deveria refletir os detalhes da associação entre o personagem e o ambiente, implicando em uma maior naturalidade e realismo nos movimentos. Assim, chega-se à proposta de que um comportamento é considerado natural se ele mantém coerência entre o corpo do personagem e o ambiente ao seu redor. Para um observador externo, tal coerência é percebida como comportamento inteligente. Essa noção resulta do atual debate, no campo da Inteligência Artificial, sobre o significado da inteligência. Baseado nas novas tendências surgidas dessas discussões, argumenta-se que o nível de coerência necessário a um comportamento natural apenas pode ser alcançado através de técnicas de emergência. Além da defesa conceitual da abordagem emergentista para a geração de comportamento de personagens virtuais, este estudo apresenta novas técnicas para a implementação dessas ideias. Entre as contribuições, está a proposta de um novo processo de codificação e evolução de Redes Neurais Artificiais que permite o desenvolvimento de controladores para explorar as possibilidades da geração de comportamentos por emergência. Também é explorada a evolução sem objetivo, através da simulação da reprodução sexuada de personagens. Para validar a tese, foram desenvolvidos experimentos envolvendo um robô virtual. Os resultados apresentados mostram que a auto-organização de um sistema é de fato capaz de produzir um acoplamento íntimo entre agente e ambiente. Como consequência da abordagem adotada, foram obtidos comportamentos bastante coerentes com as capacidades dos personagens e as condições ambientais, com ou sem descrição de objetivos. Os métodos propostos se mostraram sensíveis a modificações do ambiente e a modificações no sensoriamento do robô, comprovando robustez ao gerar córtices visuais funcionais, seja com sensores de proximidade, seja com câmeras virtuais, interpretando seus pixels. Ressalta-se também a geração de diferentes tipos de comportamentos interessantes, sem qualquer descrição de objetivos, nos experimentos envolvendo reprodução simulada.
The user's sense of immersion requires not only high visual quality of the virtual environment, but also accurate simulations of dynamics to ensure the reliability of the experience. In this context, the way the characters behave in a virtual environment plays a fundamental role. The problem that remains open is: "What needs to be done for autonomous virtual characters to display natural, realistic behaviors?". A behavior is considered autonomous when the actions performed by the agent result from a close interaction between its internal dynamics and the circumstantial events in the environment, rather than from external control or a specification dictated by a predefined plan. Thus, an autonomous behavior should reflect the details of the association between the character and its environment, resulting in greater naturalness and more realistic movements. Therefore, it is proposed that a behavior is considered natural if it maintains coherence between the character's body and the environment surrounding it. To an external observer, such coherence is perceived as intelligent behavior. This notion of intelligent behavior arose from a current debate, in the field of Artificial Intelligence, about the meaning of intelligence. Based on the new trends that came out of those discussions, it is argued that the level of coherence required for natural behavior in complex situations can only be achieved through emergence. In addition to the conceptual support of the emergentist approach to generating the behavior of virtual characters, this study presents new techniques for implementing those ideas. A contribution of this work is a novel technique for the encoding and evolution of Artificial Neural Networks, which allows the development of controllers to explore the possibilities of generating behaviors through emergence. Evolution without an objective description is also explored through the simulation of the sexual reproduction of characters. In order to validate the thesis, experiments involving a virtual robot were developed. The results show that the self-organization of a system is indeed able to produce an intimate coupling between agent and environment. As a consequence of the adopted approach, behaviors quite consistent with the character's capabilities and environmental conditions were achieved, with or without a description of objectives. The proposed methods were sensitive to changes in the environment and in the robot's sensory apparatus, proving robust in generating functional visual cortices, either with proximity sensors or with virtual cameras interpreting their pixels. Also noteworthy is the generation of different types of interesting behaviors, without any description of objectives, in the experiments involving simulated reproduction.
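For readers unfamiliar with the underlying technique, here is a minimal neuroevolution sketch: a flat genome is decoded into a small feedforward controller, and a population of genomes is evolved against a fitness function. The encoding and operators below are deliberately generic assumptions; the thesis contributes its own, more sophisticated encoding and evolution process.

```python
# Generic neuroevolution sketch; not the encoding proposed in the thesis.
import numpy as np

def policy(params, obs, hidden=8, n_out=2):
    # Decode a flat parameter vector into a one-hidden-layer controller.
    n_in = obs.size
    w1 = params[: n_in * hidden].reshape(n_in, hidden)
    b1 = params[n_in * hidden : n_in * hidden + hidden]
    o = n_in * hidden + hidden
    w2 = params[o : o + hidden * n_out].reshape(hidden, n_out)
    b2 = params[o + hidden * n_out :]
    return np.tanh(np.tanh(obs @ w1 + b1) @ w2 + b2)  # motor outputs in [-1, 1]

def evolve(fitness, n_params, pop=50, elite=10, gens=100, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop, n_params))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-elite:]]   # keep the best
        children = parents[rng.integers(elite, size=pop - elite)]
        population = np.vstack([parents,
                                children + rng.normal(0.0, sigma, children.shape)])
    return population[np.argmax([fitness(ind) for ind in population])]

# Dummy fitness: prefer controllers whose first motor output is large.
# For obs of size 5, hidden 8 and 2 outputs: 5*8 + 8 + 8*2 + 2 = 66 parameters.
best = evolve(lambda p: policy(p, np.ones(5))[0], n_params=66, gens=20)
```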
APA, Harvard, Vancouver, ISO, and other styles
24

Wickman, Axel. "Exploring feasibility of reinforcement learning flight route planning." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178314.

Full text
Abstract:
This thesis explores and compares traditional and reinforcement learning (RL) methods of performing 2D flight path planning in 3D space. A wide overview of natural, classic, and learning approaches to planning is given, in conjunction with a review of some general recurring problems and trade-offs that appear within planning. This general background then serves as a basis for motivating different possible solutions to this specific problem. These solutions are implemented, together with a testbed in the form of a parallelizable simulation environment. This environment makes use of random world generation and physics combined with an aerodynamic model. An A* planner, a local RL planner, and a global RL planner are developed and compared against each other in terms of performance, speed, and general behavior. An autopilot model is also trained and used both to measure flight feasibility and to constrain the planners to followable paths. All planners were partially successful, with the global planner exhibiting the highest overall performance. The RL planners were also found to be more reliable in terms of both speed and followability because of their ability to leave difficult decisions to the autopilot. From this it is concluded that machine learning in general, and reinforcement learning in particular, is a promising avenue for solving the problem of flight route planning in dangerous environments.
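For reference, the classical baseline named in this abstract is A* search. A minimal grid variant with a Manhattan heuristic might look like the sketch below; this is an illustrative toy (the thesis plans 2D routes through 3D space under an aerodynamic model, not on a grid).

```python
# Toy A* on a 4-connected grid; illustrative baseline only.
import heapq, itertools

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0, next(tie), start, None)]
    came, seen = {}, set()
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in seen:
            continue
        seen.add(node)
        came[node] = parent
        if node == goal:             # walk parents back to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        y, x = node
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                    and grid[ny][nx] == 0 and (ny, nx) not in seen:
                heapq.heappush(open_set,
                               (g + 1 + h((ny, nx)), g + 1, next(tie), (ny, nx), node))
    return None                      # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```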
APA, Harvard, Vancouver, ISO, and other styles
25

PANTINI, SARA. "Analysis and modelling of leachate and gas generation at landfill sites focused on mechanically-biologically treated waste." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2013. http://hdl.handle.net/2108/203393.

Full text
Abstract:
Although significant efforts have been directed toward reducing waste generation and encouraging alternative waste management strategies, landfills still remain the main option for Municipal Solid Waste (MSW) disposal in many countries. Hence, landfills and their impacts on the surroundings are still current issues throughout the world. The major concerns relate to the potential emissions of leachate and landfill gas into the environment, which pose a threat to public health and lead to surface water and groundwater pollution, soil contamination and global warming effects. To ensure environmental protection and enhance landfill sustainability, modern sanitary landfills are equipped with several engineered systems with different functions. For instance, the installation of containment systems, such as bottom liners and multi-layer capping systems, is aimed at reducing leachate seepage and water infiltration into the landfill body as well as gas migration, while methane emissions can be further mitigated through the placement of active oxidation layers (biocovers). Leachate collection and removal systems are designed to minimize the water head forming on the bottom section of the landfill and the consequent seepage through the liner system. Finally, gas extraction and utilization systems allow energy to be recovered from landfill gas while reducing the explosion and fire risks associated with methane accumulation, even though much depends on the gas collection efficiency achieved in the field (range: 60-90%; Spokas et al., 2006; Huitric and Kong, 2006). Hence, impacts on the surrounding environment caused by the polluting substances released from the deposited waste through liquid and gas emissions can potentially be mitigated by a proper design of technical barriers and collection/extraction systems at the landfill site. Nevertheless, the long-term performance of containment systems in limiting landfill emissions is highly uncertain and strongly dependent on site-specific conditions such as climate, vegetative covers, containment systems, leachate quality and applied stress. Furthermore, the design and operation of leachate collection and treatment systems and of landfill gas extraction and utilization projects, as well as the assessment of appropriate methane reduction strategies (biocovers), require reliable emission forecasts to assess system feasibility and ensure environmental compliance. To this end, landfill simulation models can represent a useful supporting tool for a better design of leachate/gas collection and treatment systems, and can provide valuable information for evaluating the best options for containment systems depending on their performance under site-specific conditions. The capability of predicting future emission levels at a landfill site can also be improved by combining simulation models with field observations at full-scale landfills and/or with experimental studies resembling landfill conditions. Indeed, this kind of data may allow identification of the main parameters and processes governing leachate and gas generation, and can provide useful information for model refinement. In view of this need, the present research study was initially aimed at developing a new landfill screening model that, based on simplified mathematical and empirical equations, provides quantitative estimates of leachate and gas production over time, taking into account site-specific conditions, waste properties and the main landfill characteristics and processes.
In order to evaluate the applicability of the developed model and the accuracy of its emission forecasts, several simulations of four full-scale landfills, currently in the operative management stage, were carried out. The results of these case studies showed a good correspondence between leachate estimates and the monthly trends observed in the field, and revealed that the reliability of model predictions is strongly influenced by the quality of the input data. In particular, the initial waste moisture content and the waste compression index, which are usually not available from a standard characterisation, were identified as the key unknown parameters affecting leachate production. Furthermore, the applicability of the model to closed landfills was evaluated by simulating different alternative capping systems and comparing the results with those returned by the Hydrologic Evaluation of Landfill Performance (HELP) model, the most widely used model worldwide for comparative analysis of composite liner systems. Despite the simplified approach of the developed model, the simulated infiltration and leakage rates through the analysed cover systems were in line with those of HELP. However, it should be highlighted that the developed model assesses leachate and biogas production only from a quantitative point of view. Leachate and biogas composition was not included in the forecast model, since it is strongly linked to the type of waste, which would make predictions at a screening stage poorly representative of what could be expected in the field. Hence, for a qualitative analysis of leachate and gas emissions over time, a laboratory methodology including different types of lab-scale tests was applied to a particular waste material. Specifically, the research focused on mechanically-biologically treated (MBT) wastes which, after the introduction of the European Landfill Directive 1999/31/EC (European Commission, 1999), which requires member states to landfill only wastes that have been subjected to preliminary treatment, are becoming the main waste flow landfilled in new Italian facilities. However, due to the relatively recent introduction of MBT plants within the waste management system, very few data on leachate and gas emissions from MBT waste in landfills are available and, hence, current knowledge mainly results from laboratory studies. Nevertheless, the leaching characteristics of MBT materials, and the way environmental conditions may affect heavy metal mobility, are still poorly investigated in the literature. To gain deeper insight into the fundamental mechanisms governing the release of constituents from MBT wastes, several leaching experiments were performed on MBT samples collected from an Italian MBT plant, and the experimental results were modelled to obtain information on long-term leachate emissions. Namely, a combination of experimental leaching tests was performed on fully-characterized MBT waste samples, and the effect of different parameters, mainly pH and liquid-to-solid ratio (L/S), on the release of compounds was investigated by combining pH-static batch tests, pH-dependent tests and dynamic up-flow column percolation experiments. The results obtained showed that, even though the MBT wastes were characterized by a relatively high heavy metal content, only a limited amount was actually soluble and thus bioavailable.
Furthermore, the information provided by the different tests highlighted a strong linear correlation between the release pattern of dissolved organic carbon (DOC) and that of several metals (Co, Cr, Cu, Ni, V, Zn), suggesting that complexation with DOC is the mechanism controlling the leaching of these elements. Thus, by combining the results of batch and up-flow column percolation tests, partition coefficients between DOC and metal concentrations were derived. These data, coupled with a simplified screening model for DOC release, allowed a very good prediction of metal release during the experiments and may provide useful indications for the evaluation of long-term emissions from this type of waste in a landfill disposal scenario. In order to complete the study of the environmental behaviour of MBT waste, gas emissions were examined by performing different anaerobic tests. The main purpose of this study was to evaluate the potential gas generation capacity of the wastes and to assess the possible implications for gas generation of the different environmental conditions expected in the field. To this end, anaerobic batch tests were performed over a wide range of water contents (26-43 %w/w up to 75 %w/w on a wet weight basis) and temperatures (from 20-25 °C up to 55 °C) in order to simulate different landfill management options (dry tomb or bioreactor landfills). In nearly all test conditions, a rather long lag phase (several months) was observed, due to inhibition effects resulting from high concentrations of volatile fatty acids (VFAs) and ammonia, highlighting the poor degree of stability of the analysed material. Furthermore, the experimental results showed that the initial waste water content is the key factor limiting the anaerobic biological process: when the waste moisture was lower than 32 %w/w, methanogenic microbial activity was completely inhibited. Overall, the results obtained indicated that the operating conditions drastically affect gas generation from MBT waste, in terms of both gas yield and generation rate. This suggests that particular caution should be paid when using the results of lab-scale tests to evaluate the long-term behaviour expected in the field, where the boundary conditions change continuously and vary significantly depending on the climate, the landfill management strategies in place (e.g. leachate recirculation, waste disposal methods), the hydraulic characteristics of the buried waste, and the presence and type of temporary and final cover systems.
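To give a flavour of what a screening-level gas forecast looks like, the sketch below implements a generic first-order-decay estimate of annual methane generation in the spirit of widely used screening models such as LandGEM. It is an illustrative assumption, not the model developed in this thesis; the parameter values are placeholders.

```python
# Generic first-order-decay gas screening estimate; illustrative only.
import numpy as np

def methane_generation(masses, k=0.05, L0=100.0):
    """masses[i]: waste tonnage landfilled in year i.
    k: first-order decay rate [1/yr]; L0: methane potential [m3 CH4/t].
    Returns estimated annual CH4 generation [m3/yr], one value per year."""
    n = len(masses)
    q = np.zeros(n)
    for i, m in enumerate(masses):          # each disposal year...
        t = np.arange(n - i, dtype=float)   # ...decays over the later years
        q[i:] += k * L0 * m * np.exp(-k * t)
    return q

print(methane_generation([1000, 1200, 900]))
```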
APA, Harvard, Vancouver, ISO, and other styles
26

Kennell, Jonathan. "Generative Temporal Planning with Complex Processes." 2004. http://hdl.handle.net/1721.1/30472.

Full text
Abstract:
Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems. This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander. To support such a system, this thesis presents the Spock generative planner. To generate plans, Spock must be able to piece together vehicle commands and team tactics that have complex behavior represented by concurrent processes. This is in contrast to traditional planners, whose operators represent simple atomic or durative actions. Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes. RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators. Spock is also significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution. Finally, Spock is implemented as a forward-progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts. This thesis describes the Spock algorithm in detail, along with example problems and test results.
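The temporal flexibility that Temporal Plan Networks support rests on Simple Temporal Network (STN) reasoning: a set of interval constraints is consistent exactly when its distance graph contains no negative cycle. The sketch below checks consistency with Floyd-Warshall; it illustrates the underlying technique and is not Spock's implementation.

```python
# STN consistency check via all-pairs shortest paths; illustrative only.
import math

def stn_consistent(n, constraints):
    """constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi
    over n time points. Returns True iff the network is consistent."""
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)     # edge i -> j with weight hi
        d[j][i] = min(d[j][i], -lo)    # edge j -> i with weight -lo
    for k in range(n):                 # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))  # negative cycle => inconsistent

# Activity A lasts 2-4, B starts after A ends, everything within 5 time units.
print(stn_consistent(3, [(0, 1, 2, 4), (1, 2, 0, math.inf), (0, 2, 0, 5)]))
```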
APA, Harvard, Vancouver, ISO, and other styles
27

Hamdi, Abdullah. "Cascading Generative Adversarial Networks for Targeted." Thesis, 2018. http://hdl.handle.net/10754/627557.

Full text
Abstract:
The abundance of labelled data has played a crucial role in recent developments in computer vision, but this reliance faces problems such as scalability and transferability to the wild. One alternative is to utilize data without labels, i.e. unsupervised learning, to learn valuable information and put it to use in tackling vision problems. Generative Adversarial Networks (GANs) have gained momentum for their ability to model image distributions in an unsupervised manner. They learn to emulate the training set, which enables sampling from that domain and using the learned knowledge for useful applications. Several methods have been proposed to enhance GANs, including regularizing the loss with some form of feature matching. We seek to push GANs beyond the training data and explore unseen territory in the image manifold. We first propose a new regularizer for GANs based on K-Nearest Neighbor (K-NN) selective feature matching to a target set Y in a high-level feature space, during the adversarial training of the GAN on the base set X; we call this novel model K-GAN. We show that minimizing the added term follows from cross-entropy minimization between the distributions of the GAN and the set Y. Then, we introduce a cascaded framework for GANs that addresses the task of imagining a new distribution combining the base set X and the target set Y, by cascading sampling GANs with translation GANs; we dub the cascade of such GANs the Imaginative Adversarial Network (IAN). Several cascades are trained on a collected dataset, Zoo-Faces, and the generated novel samples are shown, including those from the K-GAN cascade. We conduct an objective and subjective evaluation of different IAN setups on the addressed task of generating innovative samples, and we show the effect of regularizing the GAN on the different scores. We conclude with some useful applications for these IANs, such as multi-domain manifold traversing.
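The K-NN selective feature-matching term can be pictured as an extra generator loss measuring how far generated features fall from their nearest neighbours in the target set Y. The sketch below, assuming PyTorch, illustrates that idea; the function name and the weighting in the comment are assumptions, not the authors' K-GAN code.

```python
# Illustrative K-NN feature-matching regularizer; not the K-GAN code.
import torch

def knn_feature_match(fake_feats, target_feats, k=5):
    """Mean squared distance from each generated feature vector to its k
    nearest neighbours among target-set (Y) features. Requires
    target_feats.shape[0] >= k."""
    d = torch.cdist(fake_feats, target_feats) ** 2   # (n_fake, n_target)
    knn = d.topk(k, dim=1, largest=False).values     # k smallest per row
    return knn.mean()

# Sketch of use in a generator objective, with F a fixed feature extractor:
#   g_loss = adversarial_loss + lam * knn_feature_match(F(G(z)), F(y_batch))
```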
APA, Harvard, Vancouver, ISO, and other styles
28

(7242737), Pradeep Periasamy. "Generative Adversarial Networks for Lupus Diagnostics." Thesis, 2019.

Find full text
Abstract:
The recent boom of machine learning network architectures like Generative Adversarial Networks (GAN), Deep Convolution Generative Adversarial Networks (DCGAN), Self Attention Generative Adversarial Networks (SAGAN) and Context Conditional Generative Adversarial Networks (CCGAN), together with the development of high-performance computing for big data analysis, has the potential to be highly beneficial in many domains, fittingly in the early detection of chronic diseases. The clinical heterogeneity of one such chronic auto-immune disease, Systemic Lupus Erythematosus (SLE), also known as Lupus, makes medical diagnosis difficult. One major concern is the limited dataset available for diagnostics. In this research, we demonstrate the application of Generative Adversarial Networks for data augmentation and for improving the error rates of Convolutional Neural Networks (CNN). A limited Lupus dataset of 30 typical 'butterfly rash' images is used as a model to decrease the error rates of a widely accepted CNN architecture, Le-Net. For the Lupus dataset, a 73.22% decrease in the error rate of Le-Net is observed. Such an approach can therefore be extended to more recent neural network classifiers like ResNet. Additionally, a human perceptual study reveals that 45 Amazon MTurk participants, identified as 'healthcare professionals' on the Amazon MTurk platform, judged the artificial images generated by CCGAN to resemble real Lupus images more closely than those generated by SAGAN and DCGAN. This research aims to help reduce the time to detection and treatment of Lupus, which usually takes 6 to 9 months from onset.
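For context, a DCGAN-style generator of the kind typically used for such augmentation can be very small. The sketch below, assuming PyTorch, maps noise to 32x32 images; the architecture and sizes are illustrative assumptions, not the networks used in this research.

```python
# Small DCGAN-style generator; illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 4, 4, 1, 0),    # -> 4x4
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), # -> 8x8
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1),     # -> 16x16
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1),     # -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):  # z: (batch, z_dim, 1, 1)
        return self.net(z)

g = Generator()
print(g(torch.randn(2, 100, 1, 1)).shape)  # torch.Size([2, 3, 32, 32])
```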
APA, Harvard, Vancouver, ISO, and other styles
29

Beal, Jacob. "Generating Communications Systems Through Shared Context." 2002. http://hdl.handle.net/1721.1/7079.

Full text
Abstract:
In a distributed model of intelligence, peer components need to communicate with one another. I present a system which enables two agents connected by a thick twisted bundle of wires to bootstrap a simple communication system from observations of a shared environment. The agents learn a large vocabulary of symbols, as well as inflections on those symbols which allow thematic role-frames to be transmitted. Language acquisition time is rapid and linear in the number of symbols and inflections. The final communication system is robust and performance degrades gradually in the face of problems.
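One simple way to picture bootstrapping a shared vocabulary from a common environment is cross-situational learning: count how often each received signal co-occurs with each observed feature of the scene, and decode a signal as its strongest association. The toy below illustrates only this counting idea and is not Beal's wire-bundle mechanism.

```python
# Toy cross-situational symbol grounding; illustrative only.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def observe(signal, scene):
    # Credit every feature co-present with the received signal.
    for feature in scene:
        counts[signal][feature] += 1

def decode(signal):
    # The feature most often co-present when this signal was received.
    return max(counts[signal], key=counts[signal].get)

observe("s1", {"red", "ball"})
observe("s1", {"red", "cube"})
print(decode("s1"))  # -> 'red'
```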
APA, Harvard, Vancouver, ISO, and other styles
30

Flimmel, Július. "Koevoluce AI a generování levelů do hry Super Mario." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-435274.

Full text
Abstract:
Procedural Content Generation (PCG) is now used in many games to generate a wide variety of content, and it often uses players controlled by Artificial Intelligence for its evaluation. PCG content can also be used when training AI players to achieve better generalization. In both of these fields, evolutionary algorithms are employed, but they are rarely used together. In this thesis, we use the coevolution of AI players and level generators for the platformer game Super Mario. The benefit of coevolution is that the AI players are evaluated against adapting level generators and, vice versa, the level generators are evaluated against adapting AI players. This approach has two results. The first is the creation of multiple level generators, each generating levels of gradually increasing difficulty. Levels generated using a sequence of these generators also mirror the learning curve of the AI player, which can be useful for human players playing the game for the first time. The second result is an AI player evolved on gradually more difficult levels; making it learn progressively may yield better results. Using coevolution also requires no training dataset.
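The coevolutionary dynamic can be reduced to a toy in which player skill and level difficulty are plain numbers: players are rewarded for beating levels, and generators (here, the levels themselves) are rewarded for challenging roughly half of the current players. The sketch below is only this toy abstraction, under stated assumptions, not the thesis implementation for Super Mario.

```python
# Toy coevolution of scalar "players" and "levels"; illustrative only.
import random

def breed(pop_list, fit, rng, sigma=0.05):
    elite = sorted(pop_list, key=fit.get, reverse=True)[: len(pop_list) // 2]
    kids = [min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, sigma)))
            for _ in range(len(pop_list) - len(elite))]
    return elite + kids

def coevolve(gens=30, pop=20, seed=1):
    rng = random.Random(seed)
    players = [rng.random() for _ in range(pop)]   # skill in [0, 1]
    levels = [rng.random() for _ in range(pop)]    # difficulty in [0, 1]
    for _ in range(gens):
        # A player is fit if it beats many current levels...
        p_fit = {p: sum(p > l for l in levels) for p in players}
        # ...and a level is fit if about half the players can beat it.
        l_fit = {l: -abs(sum(p > l for p in players) - pop // 2) for l in levels}
        players = breed(players, p_fit, rng)
        levels = breed(levels, l_fit, rng)
    return players, levels

players, levels = coevolve()
print(sum(players) / len(players), sum(levels) / len(levels))
```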
APA, Harvard, Vancouver, ISO, and other styles
31

Zhong, Sheng-Jun, and 鐘聖鈞. "Study the Use of Image Generation Techniques to Improve the Performance of AI Assisted Diabetic Retinopathy Detection." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9f7wh3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Correia, Bernardo Nunes. "Modeling and Generation of Playable Scenarios Based on Societies of Agents." Master's thesis, 2021. http://hdl.handle.net/10316/96103.

Full text
Abstract:
Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia
Geração procedimental de conteúdo em videojogos consiste em abordagens algorítmicas à geração de conteúdo de jogos autónoma ou semiautónoma. É utilizada há vários anos como uma forma de diminuir o conteúdo que artistas e designers têm de criar, para os assistir na criação de conteúdo, para diminuir a quantidade de conteúdo que é necessário guardar em memória, e para permitir a possibilidade de jogos que não necessitam de terminar. Vários métodos têm sido desenvolvidos, cada um com as suas vantagens e desvantagens. Métodos baseados em agentes geram conteúdo com a ajuda de, normalmente simples, agentes IA independentes que tomam decisões que afetam o resultado final de alguma maneira. Este trabalho procura criar uma plataforma que utilize sistemas complexos adaptativos de agentes para modelar cenários de jogo. Dois cenários de jogo foram utilizados como prova de conceito. Um deles utiliza o famoso "Game of Life" de Conway, e o segundo uma adaptação do jogo de arcada "Bomberman". Uma interface gráfica do utilizador foi desenvolvida de modo a dar aos utilizadores uma forma de ver, interagir com, e editar a simulação. Uma ferramenta de cocriação baseada em geração procedimental de conteúdo foi também desenvolvida para auxiliar o utilizador na edição da simulação. A ferramenta usa o algoritmo "Wave Function Collapse" para propagar os padrões de uma área selecionada para o resto da grelha de simulação. A arquitetura desenvolvida teve sucesso em dar aos seus utilizadores o controlo necessário para incentivar a exploração dos cenários de jogo desenvolvidos. Tal plataforma poderá ser usada como uma base para o teste e exploração de abordagens de geração procedimental de conteúdo.
Procedural content generation in video games consists of algorithmic approaches to generating game content autonomously or semi-autonomously. It has been used for several years as a way to diminish the authorial burden on artists and designers, to assist them in the creation of content, to diminish the amount of content that needs to be stored in memory, and to enable the possibility of games that do not need to end. Several methods have been developed, each one with its advantages and disadvantages. Agent-based methods generate game artifacts with the help of, often very simplistic, AI agents that independently make decisions that affect the end result in some way. This work aims at creating a platform that utilizes complex adaptive systems of these agents to model game scenarios. Two proof-of-concept game scenarios were created. One of them used the famous Conway's Game of Life and the other an adaptation of the arcade game "Bomberman". A graphical user interface was developed in order to give users a way to view, interact with, and edit the simulation. A procedural content generation-based co-creation tool was also developed to further aid the user. The tool uses the Wave Function Collapse algorithm to propagate the pattern style of a selected area to the rest of the simulation grid. The developed architecture is successful in giving the user the control needed to incentivize exploration of the developed game scenarios. Such a platform could be used as a base for the testing and exploration of PCG approaches.
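For reference, the core loop of the simple-tiled Wave Function Collapse algorithm is: repeatedly collapse the cell with the fewest remaining options to a single tile, then propagate adjacency constraints outward. The sketch below is a deliberately simplified illustration (symmetric adjacency, no contradiction handling, no pattern weights), not the co-creation tool built in this dissertation.

```python
# Simplified simple-tiled WFC core loop; illustrative only. A full
# implementation must also restart or backtrack when a cell empties.
import random

def wfc(w, h, tiles, allowed, seed=0):
    """allowed[(a, b)] is True if tile b may sit next to tile a."""
    rng = random.Random(seed)
    grid = [[set(tiles) for _ in range(w)] for _ in range(h)]

    def neighbours(y, x):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                yield ny, nx

    while True:
        open_cells = [(len(c), y, x) for y, row in enumerate(grid)
                      for x, c in enumerate(row) if len(c) > 1]
        if not open_cells:
            return [[next(iter(c)) for c in row] for row in grid]
        _, y, x = min(open_cells)                      # lowest entropy
        grid[y][x] = {rng.choice(sorted(grid[y][x]))}  # collapse to one tile
        stack = [(y, x)]
        while stack:                                   # constraint propagation
            cy, cx = stack.pop()
            for ny, nx in neighbours(cy, cx):
                keep = {t for t in grid[ny][nx]
                        if any(allowed.get((s, t), False) for s in grid[cy][cx])}
                if keep != grid[ny][nx]:
                    grid[ny][nx] = keep
                    stack.append((ny, nx))

# Tiles 0..2 may neighbour tiles whose id differs by at most 1.
allowed = {(a, b): abs(a - b) <= 1 for a in range(3) for b in range(3)}
print(wfc(4, 3, [0, 1, 2], allowed))
```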
APA, Harvard, Vancouver, ISO, and other styles