A selection of scholarly literature on the topic "Generative competitive neural network"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Generative competitive neural network".

Next to every work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided these are available in the metadata.

Journal articles on the topic "Generative competitive neural network"

1

Kuznetsov, A. V., and M. V. Gashnikov. "Remote sensing data retouching based on image inpainting algorithms in the forgery generation problem." Computer Optics 44, no. 5 (October 2020): 763–71. http://dx.doi.org/10.18287/2412-6179-co-721.

Full text of the source
Abstract:
We investigate image retouching algorithms for generating forgery Earth remote sensing data. We provide an overview of existing neural network solutions in the field of generation and inpainting of remote sensing images. To retouch Earth remote sensing data, we use image inpainting algorithms based on convolutional neural networks and generative adversarial neural networks. We pay special attention to a generative neural network with a separate contour prediction block that includes two series-connected generative-adversarial subnets. The first subnet inpaints contours of the image within the retouched area. The second subnet uses the inpainted contours to generate the resulting retouched area. As a basis for comparison, we use exemplar-based algorithms of image inpainting. We carry out computational experiments to study the effectiveness of these algorithms when retouching natural remote sensing data of various types. We perform a comparative analysis of the quality of the algorithms considered, depending on the type, shape and size of the retouched objects and areas. We give qualitative and quantitative characteristics of the efficiency of the studied image inpainting algorithms when retouching Earth remote sensing data. We experimentally prove the advantage of generative adversarial neural networks in the construction of forged remote sensing data.
Styles: APA, Harvard, Vancouver, ISO, etc.
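The two-stage generator described in the preceding entry (a contour-prediction subnet feeding an image-completion subnet) can be sketched as two chained convolutional generators. The following is a minimal PyTorch sketch with assumed layer sizes and module names; it is illustrative only and not the authors' implementation (each subnet would also have its own adversarial discriminator during training, omitted here).

    import torch
    import torch.nn as nn

    class ContourGenerator(nn.Module):
        """First subnet: predicts a contour map inside the masked (retouched) region."""
        def __init__(self):
            super().__init__()
            # input: masked image (3 ch) + binary mask (1 ch) -> 1-channel contour map
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, masked_img, mask):
            return self.net(torch.cat([masked_img, mask], dim=1))

    class ImageGenerator(nn.Module):
        """Second subnet: fills the masked region guided by the predicted contours."""
        def __init__(self):
            super().__init__()
            # input: masked image (3) + mask (1) + contours (1) -> completed image (3)
            self.net = nn.Sequential(
                nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, masked_img, mask, contours):
            return self.net(torch.cat([masked_img, mask, contours], dim=1))

    # Chained forward pass on a dummy batch of "satellite tiles".
    g1, g2 = ContourGenerator(), ImageGenerator()
    img = torch.rand(2, 3, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.8).float()
    masked = img * (1 - mask)
    contours = g1(masked, mask)
    completed = g2(masked, mask, contours)
    print(completed.shape)  # torch.Size([2, 3, 64, 64])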
2

Tsibulis, Dmitry E., Andrey N. Ragozin, Stanislav N. Darovskikh, and Askar Z. Kulganatov. "Study of nonlinear digital filtering of signals using generative competitive neural network." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 22, no. 2 (April 2022): 158–67. http://dx.doi.org/10.14529/ctcr220215.

Full text of the source
Abstract:
The article presents the results of the study, as well as the structural schemes and parameters of the components of the generative adversarial neural network. Graphical images of the results of filtering radio signals are given. Conclusions are drawn about the possibilities of using these neural networks. The purpose of the study. Substantiation of the possibilities of using generative adversarial artificial neural networks to solve problems of digital processing of radio signals. Materials and methods. To evaluate the results of digital filtering of noisy signals, the method of mathematical modeling in the Matlab environment was used. The following were taken as test signals: a sine wave, a signal in the form of a sum of sinusoids, and a model of a real radio-technical information signal. White Gaussian noise is used as the noise component. Filtering is also performed for a signal from which a fragment of a certain length is missing. A training sample was generated for the generator neural network, consisting of noisy test signals. A training sample for the discriminator neural network was also generated, consisting of test signals that do not contain noise. Results. Based on the simulation, it is concluded that the generative adversarial neural network successfully solves the problem of isolating a useful signal from its mixture with noise of various physical nature. Such a neural network structure is also able to restore a useful signal if any part of it is missing as a result of external interference. Conclusion. The existing methods of digital filtering of radio signals require certain labor and time costs associated with the calculation of digital filters. In addition, when designing high-order filters, calculating these filters becomes difficult. The idea of using a neural network in filtering tasks makes it possible to significantly reduce the filter design time, thus simplifying the process of its implementation. A neural network, being a self-learning system, can find solutions that are inaccessible to conventional digital filtering algorithms. The results of this work can find application in the field of digital signal processing and in the development of software-defined radio.
Styles: APA, Harvard, Vancouver, ISO, etc.
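The data-preparation step described in the preceding entry (noisy test signals as generator inputs, clean test signals as discriminator references) can be sketched as below. This is a minimal NumPy sketch with assumed signal lengths, SNR, and gap width; it only builds the training arrays and does not reproduce the authors' Matlab models.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, n = 1000, 1024                      # assumed sampling rate (Hz) and signal length
    t = np.arange(n) / fs

    def clean_signals(batch):
        """Test signals: a sine wave or a sum of sinusoids with random parameters."""
        out = []
        for _ in range(batch):
            if rng.random() < 0.5:
                s = np.sin(2 * np.pi * rng.uniform(5, 50) * t)
            else:
                freqs = rng.uniform(5, 50, size=3)
                s = sum(np.sin(2 * np.pi * f * t) for f in freqs) / 3.0
            out.append(s)
        return np.stack(out)

    def add_white_gaussian_noise(x, snr_db=10.0):
        p_signal = np.mean(x ** 2, axis=1, keepdims=True)
        p_noise = p_signal / (10 ** (snr_db / 10))
        return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

    def drop_fragment(x, frac=0.1):
        """Zero out a random contiguous fragment to emulate a missing section."""
        x = x.copy()
        width = int(frac * x.shape[1])
        for row in x:
            start = rng.integers(0, x.shape[1] - width)
            row[start:start + width] = 0.0
        return x

    clean = clean_signals(64)                               # discriminator training sample
    noisy = drop_fragment(add_white_gaussian_noise(clean))  # generator training sample
    print(clean.shape, noisy.shape)                         # (64, 1024) (64, 1024)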
3

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Full text of the source
Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model's adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Shin, Wonsup, Seok-Jun Bu, and Sung-Bae Cho. "3D-Convolutional Neural Network with Generative Adversarial Network and Autoencoder for Robust Anomaly Detection in Video Surveillance." International Journal of Neural Systems 30, no. 06 (May 28, 2020): 2050034. http://dx.doi.org/10.1142/s0129065720500343.

Full text of the source
Abstract:
As surveillance devices proliferate, various machine learning approaches for video anomaly detection have been attempted. We propose a hybrid deep learning model composed of a video feature extractor trained by a generative adversarial network with deficient anomaly data and an anomaly detector boosted by transferring the extractor. Experiments with the UCSD pedestrian dataset show that it achieves 94.4% recall and 86.4% precision, which is competitive performance in video anomaly detection.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Forster, Dennis, Abdul-Saboor Sheikh, and Jörg Lücke. "Neural Simpletrons: Learning in the Limit of Few Labels with Directed Generative Networks." Neural Computation 30, no. 8 (August 2018): 2113–74. http://dx.doi.org/10.1162/neco_a_01100.

Full text of the source
Abstract:
We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data. The network is derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With the single objective of likelihood optimization, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local. As a consequence, the network can be scaled using standard deep learning tools for parallelized GPU implementation. Using standard benchmarks for nonnegative data, such as text document representations, MNIST, and NIST SD19, we study the classification performance when very few labels are used for training. In different settings, the network's performance is compared to standard and recently suggested semisupervised classifiers. While other recent approaches are more competitive for many labels or fully labeled data sets, we find that the network studied here can be applied to numbers of few labels where no other system has been reported to operate so far.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Wang, Zheng, and Qingbiao Wu. "An Integrated Deep Generative Model for Text Classification and Generation." Mathematical Problems in Engineering 2018 (August 19, 2018): 1–8. http://dx.doi.org/10.1155/2018/7529286.

Full text of the source
Abstract:
Text classification and generation are two important tasks in the field of natural language processing. In this paper, we deal with both tasks via a Variational Autoencoder, which is a powerful deep generative model. The self-attention mechanism is introduced to the encoder. The modified encoder extracts the global feature of the input text to produce the hidden code, and we train a neural network classifier based on the hidden code to perform the classification. On the other hand, the label of the text is fed into the decoder explicitly to enhance the categorization information, which could help with text generation. The experiments have shown that our model achieves competitive classification results and that the generated text is realistic. Thus the proposed integrated deep generative model could be an alternative for both tasks.
Styles: APA, Harvard, Vancouver, ISO, etc.
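A minimal sketch of the combined objective described in the preceding entry: a VAE whose latent code also feeds a classifier, with the label concatenated to the decoder input. The bag-of-words setup, layer sizes, and class count below are assumptions for illustration; the paper's self-attention encoder is replaced by a plain MLP for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    V, Z, C = 1000, 32, 4     # vocabulary size, latent size, number of classes (illustrative)

    class TextVAEClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(V, 256), nn.ReLU())
            self.mu, self.logvar = nn.Linear(256, Z), nn.Linear(256, Z)
            self.clf = nn.Linear(Z, C)                       # classifier on the hidden code
            self.dec = nn.Sequential(nn.Linear(Z + C, 256), nn.ReLU(), nn.Linear(256, V))

        def forward(self, bow, label):
            h = self.enc(bow)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
            logits_cls = self.clf(z)
            label_onehot = F.one_hot(label, C).float()                # label fed to the decoder
            logits_rec = self.dec(torch.cat([z, label_onehot], dim=1))
            return logits_rec, logits_cls, mu, logvar

    model = TextVAEClassifier()
    bow = torch.rand(8, V)                    # toy bag-of-words batch
    label = torch.randint(0, C, (8,))
    logits_rec, logits_cls, mu, logvar = model(bow, label)

    rec = F.binary_cross_entropy_with_logits(logits_rec, (bow > 0.5).float())
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cls = F.cross_entropy(logits_cls, label)
    loss = rec + kl + cls                     # joint generation + classification objective
    print(float(loss))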
7

Huang, Wenlong, Brian Lai, Weijian Xu, and Zhuowen Tu. "3D Volumetric Modeling with Introspective Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8481–88. http://dx.doi.org/10.1609/aaai.v33i01.33018481.

Full text of the source
Abstract:
In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN, which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard Inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Sineglazov, V. M., and O. I. Chumachenko. "Structural-parametric synthesis of deep learning neural networks." Artificial Intelligence 25, no. 4 (December 25, 2020): 42–51. http://dx.doi.org/10.15407/jai2020.04.042.

Full text of the source
Abstract:
The structural-parametric synthesis of deep learning neural networks, in particular convolutional neural networks used in image processing, is considered. A classification of modern architectures of convolutional neural networks is given. It is shown that almost every convolutional neural network, depending on its topology, has unique blocks that determine its essential features (for example, the Squeeze-and-Excitation block, the Convolutional Block Attention Module (channel attention module, spatial attention module), the Residual block, the Inception module, the ResNeXt block). The problem of structural-parametric synthesis of convolutional neural networks is stated, and a genetic algorithm is proposed for its solution. The genetic algorithm is used to effectively overcome a large search space: on the one hand, to generate possible topologies of the convolutional neural network, namely the choice of specific blocks and their locations in the structure of the convolutional neural network, and on the other hand, to solve the problem of structural-parametric synthesis of the convolutional neural network of the selected topology. The most significant parameters of the convolutional neural network are determined. An encoding method is proposed that makes it possible to represent each network structure as a string of fixed length in binary format. After that, several standard genetic operations are applied, i.e. selection, mutation and crossover, which eliminate weak individuals of the previous generation and use the remaining ones to generate competitive offspring. An example of solving this problem is given; a database of patients with thyroid disease (ultrasound results) was used as the training sample.
Styles: APA, Harvard, Vancouver, ISO, etc.
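The encoding-plus-genetic-operations scheme summarized in the preceding entry can be sketched as follows: each candidate topology is a fixed-length binary string (here, one bit per block type), and selection, crossover, and mutation evolve the population. The fitness function below is a placeholder; in the paper it would be the validation accuracy of the decoded and trained convolutional network.

    import random

    random.seed(0)
    BLOCKS = ["SE", "CBAM", "Residual", "Inception", "ResNeXt"]
    L = len(BLOCKS)                      # fixed string length: one bit per block type

    def random_individual():
        return [random.randint(0, 1) for _ in range(L)]

    def fitness(ind):
        # Placeholder: in practice, decode `ind` into a CNN, train it, and
        # return its validation accuracy on the target dataset.
        return sum(ind) + random.random() * 0.1

    def select(pop, k=2):
        return max(random.sample(pop, k), key=fitness)     # tournament selection

    def crossover(a, b):
        p = random.randint(1, L - 1)                       # single-point crossover
        return a[:p] + b[p:]

    def mutate(ind, rate=0.1):
        return [1 - g if random.random() < rate else g for g in ind]

    pop = [random_individual() for _ in range(10)]
    for generation in range(5):
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(len(pop))]

    best = max(pop, key=fitness)
    print("best topology:", [b for b, g in zip(BLOCKS, best) if g == 1])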
9

Lee, Byong Kwon. "A Combining AI Algorithm for the Restoration of Damaged Cultural Properties." Webology 19, no. 1 (January 20, 2022): 4384–95. http://dx.doi.org/10.14704/web/v19i1/web19288.

Full text of the source
Abstract:
Through the research of numerous researchers, artificial intelligence now imitates human language and visual expression with good performance, and imitates human style in voice and pictures. Although this ability depends on the data used for learning, artificial intelligence is more objective and more grounded in numerical data than humans. We applied it to the restoration of cultural assets made in the past through artificial intelligence neural networks, and we applied a general CNN a little differently for the purpose of restoration. Cultural properties contain various backgrounds from the era when they were created, and for this reason there are many complications and difficulties in restoration. If the damage is simply regarded as noise and recovered, the result depends on the learned data. To solve this problem, the CNN was separated into full and detailed parts, the association between them was learned jointly, and the damaged part was repaired through a generative adversarial network (GAN) based on this neural network. We trained a neural network that extracts visual features of the Korean "Pagoda" (mostly produced under the influence of Buddhism) and conducted a study to repair the damaged parts based on the trained neural network. The features of the tower were extracted through a CNN-based neural network, and the damaged part was repaired through a Generative Adversarial Network (GAN) based on the extracted features. We believe that our research will be actively used for the restoration of cultural assets as well as the restoration of archaeological records in the future.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Wang, Zilin, Zhaoxiang Zhang, Limin Dong, and Guodong Xu. "Jitter Detection and Image Restoration Based on Generative Adversarial Networks in Satellite Images." Sensors 21, no. 14 (July 9, 2021): 4693. http://dx.doi.org/10.3390/s21144693.

Full text of the source
Abstract:
High-resolution satellite images (HRSIs) obtained from onboard satellite linear array cameras suffer from geometric disturbance in the presence of attitude jitter. Therefore, detection and compensation of satellite attitude jitter are crucial to reduce the geopositioning error and to improve the geometric accuracy of HRSIs. In this work, a generative adversarial network (GAN) architecture is proposed to automatically learn and correct the deformed scene features from a single remote sensing image. In the proposed GAN, a convolutional neural network (CNN) is designed to discriminate the inputs, and another CNN is used to generate so-called fake inputs. To explore the usefulness and effectiveness of a GAN for jitter detection, the proposed GANs are trained on part of the PatternNet dataset and tested on three popular remote sensing datasets, along with a deformed Yaogan-26 satellite image. Several experiments show that the proposed model provides competitive results. The proposed GAN reveals the enormous potential of GAN-based methods for the analysis of attitude jitter from remote sensing images.
Styles: APA, Harvard, Vancouver, ISO, etc.

Dissertations on the topic "Generative competitive neural network"

1

Гайдук, Ірина Вадимівна. "Вирішення транспортної задачі методами машинного навчання". Master's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/46504.

Full text of the source
Abstract:
Master's thesis: 87 pages, 27 figures, 24 tables, 21 sources. The thesis considers the classical optimal transportation problem. Known methods for solving it are studied, including their advantages and disadvantages and the necessary conditions for the existence of an optimal solution. In addition, a machine learning method for solving the problem is proposed, with a model built and trained on the basis of a generative neural network. General information on methods for solving the optimal transportation problem under imbalance and at scale is also reviewed. The results of three different types of problems solved with the machine learning method are analyzed. The object of the study is the classical optimal transportation problem in three different forms. The subject of the study is machine learning methods, in particular the generative adversarial neural network.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Liu, Mengxin. "Generative Neural Network for Portfolio Optimization." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53027.

Full text of the source
Abstract:
This thesis aims to overcome the drawbacks of traditional portfolio optimization by employing Generative Deep Neural Networks on real stock data. The proposed framework is capable of generating return data that have similar statistical characteristics as the original stock data. The result is acquired using the Monte Carlo simulation method and presented in terms of individual risk. The method is tested on real Swedish stock market data. A practical example demonstrates how to optimize a portfolio based on the output of the proposed Generative Adversarial Networks.
Styles: APA, Harvard, Vancouver, ISO, etc.
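A minimal sketch of the Monte Carlo step mentioned in the preceding entry: once a generator has been trained to emit return scenarios with realistic statistics, per-asset ("individual") risk can be estimated from many sampled scenarios. The generator below is a stub returning Gaussian returns and all dimensions are assumptions; in the thesis it would be the trained generative network.

    import numpy as np

    rng = np.random.default_rng(42)
    n_assets, n_scenarios, horizon = 5, 10_000, 20

    def generate_returns(n):
        """Stub for the trained generator: n scenarios of daily returns,
        shape (n, horizon, n_assets)."""
        return rng.normal(loc=0.0005, scale=0.01, size=(n, horizon, n_assets))

    scenarios = generate_returns(n_scenarios)
    cum_returns = scenarios.sum(axis=1)              # cumulative return per scenario and asset

    volatility = cum_returns.std(axis=0)             # individual risk: standard deviation
    var_95 = -np.percentile(cum_returns, 5, axis=0)  # individual risk: 95% value-at-risk

    for i, (vol, var) in enumerate(zip(volatility, var_95)):
        print(f"asset {i}: volatility={vol:.4f}  VaR95={var:.4f}")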
3

Yamazaki, Hiroyuki Vincent. "On Depth and Complexity of Generative Adversarial Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217293.

Full text of the source
Abstract:
Although generative adversarial networks (GANs) have achieved state-of-the-art results in generating realistic-looking images, they are often parameterized by neural networks with relatively few learnable weights compared to those that are used for discriminative tasks. We argue that this is suboptimal in a generative setting where data is often entangled in high-dimensional space and models are expected to benefit from high expressive power. Additionally, in a generative setting, a model often needs to extrapolate missing information from a low-dimensional latent space when generating data samples, while in a typical discriminative task, the model only needs to extract lower-dimensional features from high-dimensional space. We evaluate different architectures for GANs with varying model capacities using shortcut connections in order to study the impact of capacity on training stability and sample quality. We show that while training tends to oscillate and not benefit from the additional capacity of naively stacked layers, GANs are capable of generating samples of higher quality, specifically images of higher visual fidelity, given proper regularization and careful balancing.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Aftab, Nadeem. "Disocclusion Inpainting using Generative Adversarial Networks." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40502.

Full text of the source
Abstract:
The older methods used for image inpainting in the Depth Image Based Rendering (DIBR) process are inefficient in producing high-quality virtual views from captured data. From the viewpoint of the original image, the generated data's structure seems less distorted in the virtual view obtained by translation, but when the virtual view involves rotation, gaps and missing spaces become visible in the DIBR-generated data. The typical approaches for filling the disocclusion tend to be slow, inefficient, and inaccurate. In this project, a modern technique, the Generative Adversarial Network (GAN), is used to fill the disocclusion. A GAN consists of two or more neural networks that compete against each other and get trained. The results of this study show that a GAN can inpaint the disocclusion with structural consistency. Additionally, another method (filling) is used to enhance the quality of the GAN and DIBR images. The statistical evaluation of the results shows that the GAN and the filling method enhance the quality of DIBR images.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Amartur, Sundar C. "Competitive recurrent neural network model for clustering of multispectral data." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058445974.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Daley, Jr John. "Generating Synthetic Schematics with Generative Adversarial Networks." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20901.

Full text of the source
Abstract:
This study investigates synthetic schematic generation using conditional generative adversarial networks; specifically, the Pix2Pix algorithm was implemented for the experimental phase of the study. With the increase in deep neural networks' capabilities and availability, there is a demand for verbose datasets. This, in combination with increased privacy concerns, has led to the use of synthetic data generation. Analysis of the synthetic images was completed using a survey. Blueprint images were generated and were successful in passing as genuine images with an accuracy of 40%. This study confirms the ability of generative neural networks to produce synthetic blueprint images.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Ionascu, Beatrice. "Modelling user interaction at scale with deep generative methods." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239333.

Full text of the source
Abstract:
Understanding how users interact with a company's service is essential for data-driven businesses that want to better cater to their users and improve their offering. By using a generative machine learning approach it is possible to model user behaviour and generate new data to simulate or recognize and explain typical usage patterns. In this work we introduce an approach for modelling users' interaction behaviour at scale in a client-service model. We propose a novel representation of multivariate time-series data as time pictures that express temporal correlations through spatial organization. This representation shares two key properties that convolutional networks have been built to exploit and allows us to develop an approach based on deep generative models that use convolutional networks as backbone. In introducing this approach of feature learning for time-series data, we expand the application of convolutional neural networks in the multivariate time-series domain, and specifically user interaction data. We adopt a variational approach inspired by the β-VAE framework in order to learn hidden factors that define different user behaviour patterns. We explore different values for the regularization parameter β and show that it is possible to construct a model that learns a latent representation of identifiable and different user behaviours. We show on real-world data that the model generates realistic samples, that capture the true population-level statistics of the interaction behaviour data, learns different user behaviours, and provides accurate imputations of missing data.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Pagliarini, Silvia. "Modeling the neural network responsible for song learning." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0107.

Full text of the source
Abstract:
During the first period of their life, babies and juvenile birds show comparable phases of vocal development: first, they listen to their parents/tutors in order to build a neural representation of the experienced auditory stimulus, then they start to produce sound and progressively get closer to reproducing their tutor song. This phase of learning is called the sensorimotor phase and is characterized by the presence of babbling, in babies, and subsong, in birds. It ends when the song crystallizes and becomes similar to the one produced by the adults. It is possible to find analogies between the brain pathways responsible for sensorimotor learning in humans and birds: a vocal production pathway involves direct projections from auditory areas to motor neurons, and a vocal learning pathway is responsible for imitation and plasticity. The behavioral studies and the neuroanatomical structure of the vocal control circuit in humans and birds provide the basis for bio-inspired models of vocal learning. In particular, birds have brain circuits exclusively dedicated to song learning, making them an ideal model for exploring the representation of vocal learning by imitation of tutors. This thesis aims to build a vocal learning model underlying song learning in birds. An extensive review of the existing literature is discussed in the thesis: many previous studies have attempted to implement imitative learning in computational models and share a common structure. These learning architectures include the learning mechanisms and, eventually, exploration and evaluation strategies. A motor control function enables sound production, and a sensory response models either how sound is perceived or how it shapes the reward. The inputs and outputs of these functions lie (1) in the motor space (the space of motor parameters), (2) in the sensory space (real sounds), and (3) either in the perceptual space (a low-dimensional representation of the sound) or in the internal representation of goals (a non-perceptual representation of the target sound). The first model proposed in this thesis is a theoretical inverse model based on a simplified vocal learning model where the sensory space coincides with the motor space (i.e., there is no sound production). Such a simplification allows us to investigate how to introduce biological assumptions (e.g. a non-linear response) into a vocal learning model and which parameters influence the computational power of the model the most. The influence of the sharpness of auditory selectivity and of the motor dimension is discussed. To have a complete model (which is able to perceive and produce sound), we needed a motor control function capable of reproducing sounds similar to real data (e.g. recordings of adult canaries). We analyzed the capability of WaveGAN (a Generative Adversarial Network) to provide a generator model able to produce realistic canary songs. In this generator model, the input space becomes the latent space after training and allows the representation of a high-dimensional dataset in a lower-dimensional manifold. We obtained realistic canary sounds using only three dimensions for the latent space.
Among other results, quantitative and qualitative analyses demonstrate the interpolation abilities of the model, which suggests that the generator model we studied can be used as a motor function in a vocal learning model. The second version of the sensorimotor model is a complete vocal learning model with a full action-perception loop (i.e., it includes the motor space, the sensory space, and the perceptual space). The sound production is performed by the GAN generator previously obtained. A recurrent neural network classifying syllables serves as the perceptual sensory response. Similar to the first model, the mapping between the perceptual space and the motor space is learned via an inverse model. Preliminary results show the influence of the learning rate when different sensory response functions are implemented.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Gustafsson, Alexander, and Jonatan Linberg. "Investigation of generative adversarial network training : The effect of hyperparameters on training time and stability." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19847.

Full text of the source
Abstract:
Generative Adversarial Networks (GAN) is a technique used to learn the distribution of some dataset in order to generate similar data. GAN models are notoriously difficult to train, which has caused limited deployment in the industry. The results of this study can be used to accelerate the process of making GANs production ready. An experiment was conducted where multiple GAN models were trained, with the hyperparameters Leaky ReLU alpha, convolutional filters, learning rate and batch size as independent variables. A Mann-Whitney U-test was used to compare the training time and training stability of each model to the others’. Except for the Leaky ReLU alpha, changes to the investigated hyperparameters had a significant effect on the training time and stability. This study is limited to a few hyperparameters and values, a single dataset and few data points, further research in the area could look at the generalisability of the results or investigate more hyperparameters.
Styles: APA, Harvard, Vancouver, ISO, etc.
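The statistical comparison used in the thesis above can be reproduced with SciPy: a Mann-Whitney U-test on the training times (or stability scores) measured for two hyperparameter settings. The numbers below are made-up placeholders for measured training times in minutes.

    from scipy.stats import mannwhitneyu

    # Hypothetical training times (minutes) for two batch-size settings.
    times_batch_32 = [41.2, 39.8, 44.1, 40.5, 42.3, 43.0, 38.9, 41.7]
    times_batch_128 = [35.1, 33.9, 36.4, 34.7, 35.8, 33.2, 36.0, 34.5]

    stat, p_value = mannwhitneyu(times_batch_32, times_batch_128,
                                 alternative="two-sided")
    print(f"U={stat:.1f}, p={p_value:.4f}")
    if p_value < 0.05:
        print("Training times differ significantly between the two settings.")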
10

Zheng, Yilin. "Text-Based Speech Video Synthesis from a Single Face Image." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "Generative competitive neural network"

1

Panzironi, Francesca. Networks. Oxford University Press, 2017. http://dx.doi.org/10.1093/acrefore/9780190846626.013.270.

Full text of the source
Abstract:
A network may refer to "a group of interdependent actors and the relationships among them," or to a set of nodes linked by a web of interdependencies. The concept of networks has its origins in earlier philosophical and sociological ideas such as Jean-Jacques Rousseau's "general will" and Émile Durkheim's "social facts", which addressed social and political communities and how decisions are mediated and ideas are structured within them. Networks encompass a wide range of theoretical interpretations and critical applications across different disciplines, including governance networks, policy networks, public administration networks, social movement networks, intergovernmental networks, social networks, trade networks, computer networks, information networks, and neural networks. Governance networks have been proposed as alternative pluricentric governance models representing a new form of negotiated governance based on interdependence, negotiation and trust. Such networks differ from competitive market regulation and hierarchical state control in three aspects: the relationship between the actors, decision-making processes, and compliance. The decision-making processes within governance networks are founded on a reflexive rationality rather than the "procedural rationality" which characterizes competitive market regulation and the "substantial rationality" which underpins authoritative state regulation. Network theory has proved especially useful for scholars in positing the existence of loosely defined and informal webs of experts or advocates that can have a real and substantial influence on international relations discourse and policy. Two examples of the use of network theory in action are transnational advocacy networks and epistemic communities.
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Generative competitive neural network"

1

Yalçın, Orhan Gazi. "Generative Adversarial Network." In Applied Neural Networks with TensorFlow 2, 259–84. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6513-0_12.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Decaestecker, Christine. "Competitive Clustering." In International Neural Network Conference, 833. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_102.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Serrano, Will. "The Generative Adversarial Random Neural Network." In IFIP Advances in Information and Communication Technology, 567–80. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79150-6_45.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Chiarantoni, Ernesto, Giuseppe Acciani, Girolamo Fornarelli, and Silvano Vergura. "Robust Unsupervised Competitive Neural Network by Local Competitive Signals." In Artificial Neural Networks — ICANN 2002, 963–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_156.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Zhao, Shuyang, and Jianwu Li. "Generating Low-Rank Textures via Generative Adversarial Network." In Neural Information Processing, 310–18. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_32.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Muñoz-Pérez, J., M. A. García-Bernal, I. Ladrón de Guevara-Lòpez, and J. A. Gomez-Ruiz. "BICONN: A Binary Competitive Neural Network." In Computational Methods in Neural Modeling, 430–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44868-3_55.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Long, Theresa W., and Emil L. Hanzevack. "Hierarchical Competitive Net Architecture." In Neural Network Engineering in Dynamic Control Systems, 255–75. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3066-6_13.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

García-Bernal, M. A., J. Muñoz-Pérez, J. A. Gómez-Ruiz, and I. Ladrón de Guevara-López. "A Competitive Neural Network based on dipoles." In Computational Methods in Neural Modeling, 398–405. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44868-3_51.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Zhang, Dongyang, Jie Shao, Gang Hu, and Lianli Gao. "Sharp and Real Image Super-Resolution Using Generative Adversarial Network." In Neural Information Processing, 217–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_23.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Bayramli, Bayram, Usman Ali, Te Qi, and Hongtao Lu. "FH-GAN: Face Hallucination and Recognition Using Generative Adversarial Network." In Neural Information Processing, 3–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Generative competitive neural network"

1

Krivosheev, Nikolay, Ksenia Vik, Yulia Ivanova, and Vladimir Spitsyn. "Investigation of the Batch Size Influence on the Quality of Text Generation by the SeqGAN Neural Network." In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1005-1010.

Full text of the source
Abstract:
One of the problems of text generation using the LSTM neural network is a decrease in generation quality as the length of the generated text increases. There are various solutions for improving the quality of text generation based on generative adversarial neural networks. This work uses preliminary training of the LSTM neural network based on the MLE approach and further training based on the SeqGAN neural network. Based on the presented results, we can conclude that the SeqGAN-based approach makes it possible to increase the quality of text generation according to the NLL and BLEU metrics. The influence of the batch size, in the process of adversarial training of the SeqGAN neural network, on the quality of text generation has been studied. It is shown that as the batch size increases during adversarial training, the quality of LSTM neural network training improves. In this work, the Monte Carlo algorithm is not used in the training process of the SeqGAN neural network. Image captions from the COCO Image Captions dataset are used for training and testing the algorithms. The quality of text generation is assessed with the NLL and BLEU metrics. Examples of generated texts are given, with their quality assessed according to the BLEU metric.
Styles: APA, Harvard, Vancouver, ISO, etc.
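The MLE pre-training stage described in the preceding entry (before adversarial fine-tuning with SeqGAN) amounts to teacher-forced next-token prediction with an LSTM, and the batch size is then one of the knobs under study. A minimal PyTorch sketch on random token data, with purely illustrative sizes:

    import torch
    import torch.nn as nn

    vocab, emb, hid, seq_len, batch_size = 5000, 64, 128, 20, 64   # batch_size: the studied knob

    class LSTMGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hid, batch_first=True)
            self.out = nn.Linear(hid, vocab)

        def forward(self, tokens):
            h, _ = self.lstm(self.emb(tokens))
            return self.out(h)                      # next-token logits at each position

    model = LSTMGenerator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()                 # negative log-likelihood (NLL) objective

    for step in range(10):                          # MLE pre-training loop on toy data
        batch = torch.randint(0, vocab, (batch_size, seq_len + 1))
        inputs, targets = batch[:, :-1], batch[:, 1:]
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final NLL:", float(loss))   # adversarial (SeqGAN) fine-tuning would follow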
2

Ni, Yao, Dandan Song, Xi Zhang, Hao Wu, and Lejian Liao. "CAGAN: Consistent Adversarial Training Enhanced GANs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/359.

Full text of the source
Abstract:
Generative adversarial networks (GANs) have shown impressive results; however, the generator and the discriminator are optimized in a finite parameter space, which means their performance still needs to be improved. In this paper, we propose a novel approach of adversarial training between one generator and an exponential number of critics which are sampled from the original discriminative neural network via dropout. As the discrepancy between outputs of different sub-networks on the same sample can measure the consistency of these critics, we encourage the critics to be consistent on real samples and inconsistent on generated samples during training, while the generator is trained to generate consistent samples for different critics. Experimental results demonstrate that our method can obtain state-of-the-art Inception scores of 9.17 and 10.02 on supervised CIFAR-10 and unsupervised STL-10 image generation tasks, respectively, as well as achieve competitive semi-supervised classification results on several benchmarks. Importantly, we demonstrate that our method can maintain stability in training and alleviate mode collapse.
Styles: APA, Harvard, Vancouver, ISO, etc.
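The key idea in the preceding entry, sampling several critics from one discriminator via dropout and penalizing their disagreement on real samples while encouraging it on generated ones, can be sketched as below. This is a simplified reading, not the authors' exact loss; the architecture, sizes, and margin are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DropoutCritic(nn.Module):
        """Discriminator whose dropout masks define an implicit ensemble of critics."""
        def __init__(self, dim=784):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 256), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(256, 1),
            )

        def forward(self, x):
            return self.net(x)

    critic = DropoutCritic().train()     # keep dropout active so each pass samples a critic
    real = torch.rand(16, 784)
    fake = torch.rand(16, 784)           # would come from the generator

    def consistency(x):
        # Two stochastic forward passes = two sampled critics; their squared
        # disagreement measures (in)consistency on the batch x.
        return F.mse_loss(critic(x), critic(x))

    margin = 0.1
    loss_consistency = consistency(real)                      # be consistent on real data
    loss_inconsistency = F.relu(margin - consistency(fake))   # disagree on generated data
    d_adv = F.binary_cross_entropy_with_logits(critic(real), torch.ones(16, 1)) \
          + F.binary_cross_entropy_with_logits(critic(fake), torch.zeros(16, 1))
    d_loss = d_adv + loss_consistency + loss_inconsistency
    print(float(d_loss))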
3

Lu, Zhichao, Ian Whalen, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. "NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm (Extended Abstract)." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/659.

Full text of the source
Abstract:
Convolutional neural networks (CNNs) are the backbones of deep learning paradigms for numerous vision tasks. Early advancements in CNN architectures are primarily driven by human expertise and elaborate design. Recently, neural architecture search (NAS) was proposed with the aim of automating the network design process and generating task-dependent architectures. This paper introduces NSGA-Net -- an evolutionary search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step that is based on prior-knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures, and finally an exploitation step that utilizes the hidden useful knowledge stored in the entire history of evaluated neural architectures in the form of a Bayesian Network. The integration of these components allows an efficient design of architectures that are competitive and in many cases outperform both manually and automatically designed architectures on CIFAR-10 classification task. The flexibility provided from simultaneously obtaining multiple architecture choices for different compute requirements further differentiates our approach from other methods in the literature.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Padhy, Bibhu Prasad, and Barjeev Tyagi. "Artificial neural network based multi area Automatic Generation Control scheme for a competitive electricity market environment." In 2009 International Conference on Power Systems. IEEE, 2009. http://dx.doi.org/10.1109/icpws.2009.5442734.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Brati Favarin, Samuel, and Rafael Ballottin Martins. "Aplicação de Mineração de Dados para o Auxílio da Tomada de Decisão em Gestão de Pessoas." In Computer on the Beach. Itajaí: Universidade do Vale do Itajaí, 2020. http://dx.doi.org/10.14210/cotb.v11n1.p028-030.

Full text of the source
Abstract:
People are the foundation of organizations. For companies to remain competitive, they need to develop and maintain their human resources. Professionals in the area must rely on data to make their decisions; otherwise, they risk bad decisions taken only on intuition or experience. In this context, this project aimed to support the future decision making of human resource specialists of a People Management Software Company by using the KDD process to generate new knowledge. In the data mining stage, the Decision Tree, Neural Network, APRIORI, and K-Means algorithms were used, generating patterns to be analyzed with human resource specialists. Preliminary results demonstrate that it is possible to observe patterns that classify employees as highly engaged, engaged, neutral, and disengaged.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Liu, Chang, Fuchun Sun, Changhu Wang, Feng Wang, and Alan Yuille. "MAT: A Multimodal Attentive Translator for Image Captioning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/563.

Full text of the source
Abstract:
In this work we formulate the problem of image captioning as a multimodal translation task. Analogous to machine translation, we present a sequence-to-sequence recurrent neural network (RNN) model for image caption generation. Different from most existing work, where the whole image is represented by a convolutional neural network (CNN) feature, we propose to represent the input image as a sequence of detected objects which serves as the source sequence of the RNN model. In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model. To represent the image in a sequential way, we extract the object features in the image and arrange them in order using convolutional neural networks. To further leverage the visual information from the encoded objects, a sequential attention layer is introduced to selectively attend to the objects that are related to the words being generated in the sentences. Extensive experiments are conducted to validate the proposed approach on the popular benchmark dataset, i.e., MS COCO, and the proposed model surpasses the state-of-the-art methods in all metrics following the dataset splits of previous work. The proposed approach is also evaluated by the evaluation server of the MS COCO captioning challenge, and achieves very competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Bai, Yunsheng, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, and Wei Wang. "Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/275.

Full text of the source
Abstract:
We introduce a novel approach to graph-level representation learning, which is to embed an entire graph into a vector space where the embeddings of two graphs preserve their graph-graph proximity. Our approach, UGraphEmb, is a general framework that provides a novel means of performing graph-level embedding in a completely unsupervised and inductive manner. The learned neural network can be considered as a function that receives any graph as input, either seen or unseen in the training set, and transforms it into an embedding. A novel graph-level embedding generation mechanism, called Multi-Scale Node Attention (MSNA), is proposed. Experiments on five real graph datasets show that UGraphEmb achieves competitive accuracy in the tasks of graph classification, similarity ranking, and graph visualization.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Cleveston, Iury, and Esther L. Colombini. "RAM-VO: A Recurrent Attentional Model for Visual Odometry." In Anais Estendidos do Simpósio Brasileiro de Robótica e Simpósio Latino Americano de Robótica. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/wtdr_ctdr.2021.18684.

Full text of the source
Abstract:
Determining the agent's pose is fundamental for developing autonomous vehicles. Visual Odometry (VO) algorithms estimate the egomotion using only visual differences from the input frames. The most recent VO methods widely implement deep learning techniques using convolutional neural networks (CNNs), adding a high cost to processing large images. Also, more data does not imply a better prediction, and the network may have to filter out useless information. In this context, we incrementally formulate a lightweight model called RAM-VO to perform visual odometry regressions using large monocular images. Our model is extended from the Recurrent Attention Model (RAM), which has emerged as a unique architecture that implements a hard attentional mechanism guided by reinforcement learning to select the essential input information. Our methodology modifies the RAM and improves the visual and temporal representation of information, generating the intermediary RAM-R and RAM-RC architectures. Also, we include the optical flow as contextual information for initializing the RL agent and implement the Proximal Policy Optimization (PPO) algorithm to learn a robust policy. The experimental results indicate that RAM-VO can perform regressions with six degrees of freedom using approximately 3 million parameters. Additionally, experiments on the KITTI dataset confirm that RAM-VO produces competitive results using only 5.7% of the input image.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Borges, Helyane Bronoski, and Julio Cesar Nievola. "Hierarchical classification using a Competitive Neural Network." In 2012 8th International Conference on Natural Computation (ICNC). IEEE, 2012. http://dx.doi.org/10.1109/icnc.2012.6234573.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Zhai, Zhonghua, and Jian Zhai. "Identity-preserving Conditional Generative Adversarial Network." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489282.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.