Academic literature on the topic 'Generative competitive neural network'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Generative competitive neural network.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Generative competitive neural network"
Kuznetsov, A. V., and M. V. Gashnikov. "Remote sensing data retouching based on image inpainting algorithms in the forgery generation problem." Computer Optics 44, no. 5 (October 2020): 763–71. http://dx.doi.org/10.18287/2412-6179-co-721.
Tsibulis, Dmitry E., Andrey N. Ragozin, Stanislav N. Darovskikh, and Askar Z. Kulganatov. "Study of nonlinear digital filtering of signals using generative competitive neural network." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 22, no. 2 (April 2022): 158–67. http://dx.doi.org/10.14529/ctcr220215.
Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.
Shin, Wonsup, Seok-Jun Bu, and Sung-Bae Cho. "3D-Convolutional Neural Network with Generative Adversarial Network and Autoencoder for Robust Anomaly Detection in Video Surveillance." International Journal of Neural Systems 30, no. 06 (May 28, 2020): 2050034. http://dx.doi.org/10.1142/s0129065720500343.
Forster, Dennis, Abdul-Saboor Sheikh, and Jörg Lücke. "Neural Simpletrons: Learning in the Limit of Few Labels with Directed Generative Networks." Neural Computation 30, no. 8 (August 2018): 2113–74. http://dx.doi.org/10.1162/neco_a_01100.
Wang, Zheng, and Qingbiao Wu. "An Integrated Deep Generative Model for Text Classification and Generation." Mathematical Problems in Engineering 2018 (August 19, 2018): 1–8. http://dx.doi.org/10.1155/2018/7529286.
Huang, Wenlong, Brian Lai, Weijian Xu, and Zhuowen Tu. "3D Volumetric Modeling with Introspective Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8481–88. http://dx.doi.org/10.1609/aaai.v33i01.33018481.
Sineglazov, V. M., and O. I. Chumachenko. "Structural-parametric synthesis of deep learning neural networks." Artificial Intelligence 25, no. 4 (December 25, 2020): 42–51. http://dx.doi.org/10.15407/jai2020.04.042.
Lee, Byong Kwon. "A Combining AI Algorithm for the Restoration of Damaged Cultural Properties." Webology 19, no. 1 (January 20, 2022): 4384–95. http://dx.doi.org/10.14704/web/v19i1/web19288.
Wang, Zilin, Zhaoxiang Zhang, Limin Dong, and Guodong Xu. "Jitter Detection and Image Restoration Based on Generative Adversarial Networks in Satellite Images." Sensors 21, no. 14 (July 9, 2021): 4693. http://dx.doi.org/10.3390/s21144693.
Dissertations / Theses on the topic "Generative competitive neural network"
Гайдук, Ірина Вадимівна. "Вирішення транспортної задачі методами машинного навчання" [Solving the transportation problem with machine learning methods]. Master's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/46504.
Master's thesis: 87 pages, 27 figures, 24 tables, 21 sources. Theme: the classical optimal transportation problem. The research reviews the known methods for solving it, their advantages and disadvantages, and the necessary conditions for the existence of an optimal solution, and proposes a machine learning method in which a solver is constructed and trained as a generative adversarial neural network. The thesis presents general information on solving the optimal transportation problem, including its unbalanced and large-scale variants, and analyzes the results for three different types of problems solved by the machine learning method. The object of the study is the classical optimal transportation problem in three different forms; the subject of the study is machine learning methods, in particular the generative adversarial neural network.
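The "known methods" for the classical transportation problem that a thesis like this reviews can be illustrated briefly. As an assumption for demonstration (the thesis's own baseline methods are not named here), the sketch below builds an initial feasible shipping plan for a balanced instance using the classical northwest-corner rule, in pure Python:

```python
def northwest_corner(supply, demand):
    """Build an initial feasible plan for a balanced transportation
    problem (total supply == total demand) via the northwest-corner rule."""
    supply = list(supply)
    demand = list(demand)
    plan = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        shipped = min(supply[i], demand[j])
        plan[i][j] = shipped
        supply[i] -= shipped
        demand[j] -= shipped
        if supply[i] == 0:
            i += 1  # source i exhausted: move to the next row
        else:
            j += 1  # destination j satisfied: move to the next column
    return plan

# Balanced instance: total supply == total demand == 45
plan = northwest_corner([20, 25], [10, 15, 20])
```

The resulting plan satisfies all row (supply) and column (demand) totals; optimality would then be improved with, e.g., the potential (MODI) method.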
Liu, Mengxin. "Generative Neural Network for Portfolio Optimization." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53027.
Yamazaki, Hiroyuki Vincent. "On Depth and Complexity of Generative Adversarial Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217293.
Full textTrots att Generative Adversarial Networks (GAN) har lyckats generera realistiska bilder består de än idag av neurala nätverk som är parametriserade med relativt få tränbara vikter jämfört med neurala nätverk som används för klassificering. Vi tror att en sådan modell är suboptimal vad gäller generering av högdimensionell och komplicerad data och anser att modeller med högre kapaciteter bör ge bättre estimeringar. Dessutom, i en generativ uppgift så förväntas en modell kunna extrapolera information från lägre till högre dimensioner medan i en klassificeringsuppgift så behöver modellen endast att extrahera lågdimensionell information från högdimensionell data. Vi evaluerar ett flertal GAN med varierande kapaciteter genom att använda shortcut connections för att studera hur kapaciteten påverkar träningsstabiliteten, samt kvaliteten av de genererade datapunkterna. Resultaten visar att träningen blir mindre stabil för modeller som fått högre kapaciteter genom naivt tillsatta lager men visar samtidigt att datapunkternas kvaliteter kan öka, specifikt för bilder, bilder med hög visuell fidelitet. Detta åstadkoms med hjälp utav regularisering och noggrann balansering.
Aftab, Nadeem. "Disocclusion Inpainting using Generative Adversarial Networks." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40502.
Amartur, Sundar C. "Competitive recurrent neural network model for clustering of multispectral data." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058445974.
Daley, John, Jr. "Generating Synthetic Schematics with Generative Adversarial Networks." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20901.
Full textIonascu, Beatrice. "Modelling user interaction at scale with deep generative methods." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239333.
Understanding how users interact with a company's service is essential for data-driven businesses that aim to better serve their users and improve their offering. Generative machine learning makes it possible to model user behavior and generate new data in order to simulate, or to identify and explain, typical user patterns. In this work we introduce an approach for modeling user interaction at scale in a client-service setting. We propose a novel representation of multivariate time-series data as time pictures, which encode temporal correlations through spatial organization. This representation shares two key properties that convolutional networks were developed to exploit, allowing us to develop an approach based on deep generative models built on convolutional networks. By introducing this approach for time-series data, we extend the application of convolutional networks to the multivariate time-series domain, specifically to user interaction data. We use an approach inspired by the β-VAE framework to make the model learn hidden factors that define different user patterns. We explore different values of the regularization parameter β and show that it is possible to construct a model that learns a latent representation of identifiable and multiple user behaviors. We show on real-world data that the model generates realistic samples that capture the population-level statistics of the user interaction data, learns different user behaviors, and provides accurate imputations of missing data.
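The β-VAE framework referenced in this abstract weights the KL term of the VAE objective by a factor β. A minimal NumPy sketch (illustrative only; it assumes a diagonal-Gaussian posterior, a standard-normal prior, and a squared-error reconstruction term) shows the loss computation:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus beta times the
    KL divergence KL(N(mu, sigma^2) || N(0, I)), summed over latents."""
    recon = np.sum((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

x = np.array([0.5, -1.0, 0.25])
x_recon = np.array([0.4, -0.9, 0.3])
mu = np.zeros(2)       # posterior mean equal to the prior's
log_var = np.zeros(2)  # unit posterior variance

# With the posterior equal to the prior, the KL term vanishes and
# only the reconstruction error remains, regardless of beta.
loss = beta_vae_loss(x, x_recon, mu, log_var, beta=4.0)
```

Raising β penalizes latent codes that deviate from the prior more strongly, which is what encourages the disentangled factors the thesis explores.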
Pagliarini, Silvia. "Modeling the neural network responsible for song learning." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0107.
During the first period of their life, babies and juvenile birds show comparable phases of vocal development: first, they listen to their parents/tutors in order to build a neural representation of the experienced auditory stimulus, then they start to produce sound and progressively get closer to reproducing their tutor song. This phase of learning is called the sensorimotor phase and is characterized by the presence of babbling, in babies, and subsong, in birds. It ends when the song crystallizes and becomes similar to the one produced by the adults. It is possible to find analogies between the brain pathways responsible for sensorimotor learning in humans and birds: a vocal production pathway involves direct projections from auditory areas to motor neurons, and a vocal learning pathway is responsible for imitation and plasticity. The behavioral studies and the neuroanatomical structure of the vocal control circuit in humans and birds provide the basis for bio-inspired models of vocal learning. In particular, birds have brain circuits exclusively dedicated to song learning, making them an ideal model for exploring the representation of vocal learning by imitation of tutors. This thesis aims to build a vocal learning model underlying song learning in birds. An extensive review of the existing literature is discussed in the thesis: many previous studies have attempted to implement imitative learning in computational models and share a common structure. These learning architectures include the learning mechanisms and, eventually, exploration and evaluation strategies. A motor control function enables sound production, and a sensory response model captures either how sound is perceived or how it shapes the reward.
The inputs and outputs of these functions lie (1) in the motor space (the space of motor parameters), (2) in the sensory space (real sounds), and (3) either in the perceptual space (a low-dimensional representation of the sound) or in the internal representation of goals (a non-perceptual representation of the target sound). The first model proposed in this thesis is a theoretical inverse model based on a simplified vocal learning model where the sensory space coincides with the motor space (i.e., there is no sound production). Such a simplification allows us to investigate how to introduce biological assumptions (e.g., non-linear responses) into a vocal learning model and which parameters influence the computational power of the model the most. The influence of the sharpness of auditory selectivity and of the motor dimension is discussed. To have a complete model (one able to both perceive and produce sound), we needed a motor control function capable of reproducing sounds similar to real data (e.g., recordings of adult canaries). We analyzed the capability of WaveGAN (a Generative Adversarial Network) to provide a generator model able to produce realistic canary songs. In this generator model, the input space becomes the latent space after training and allows the representation of a high-dimensional dataset in a lower-dimensional manifold. We obtained realistic canary sounds using only three dimensions for the latent space. Among other results, quantitative and qualitative analyses demonstrate the interpolation abilities of the model, which suggests that the generator model we studied can be used as a motor function in a vocal learning model. The second version of the sensorimotor model is a complete vocal learning model with a full action-perception loop (i.e., it includes motor space, sensory space, and perceptual space). The sound production is performed by the GAN generator previously obtained.
A recurrent neural network classifying syllables serves as the perceptual sensory response. Similar to the first model, the mapping between the perceptual space and the motor space is learned via an inverse model. Preliminary results show the influence of the learning rate when different sensory response functions are implemented.
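The latent-space interpolation ability attributed to the trained generator above can be demonstrated generically. In this hedged toy sketch, a fixed map stands in for the trained WaveGAN generator (an assumption for illustration; the real generator is a deep convolutional network), and we decode points along a straight line through a 3-D latent space, matching the latent dimensionality reported in the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a trained generator: any fixed map from a 3-D latent
# space to a higher-dimensional "sound" space (linear + tanh for clarity).
W = rng.normal(size=(64, 3))

def generator(z):
    return np.tanh(W @ z)

z_a = rng.normal(size=3)  # latent code of one output
z_b = rng.normal(size=3)  # latent code of another

# Decode evenly spaced points on the segment between the two codes;
# a well-behaved generator yields a smooth morph between its outputs.
steps = np.linspace(0.0, 1.0, 5)
trajectory = [generator((1 - t) * z_a + t * z_b) for t in steps]
```

The endpoints of the walk reproduce the two original outputs exactly, while intermediate points interpolate between them.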
Gustafsson, Alexander, and Jonatan Linberg. "Investigation of generative adversarial network training : The effect of hyperparameters on training time and stability." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19847.
Zheng, Yilin. "Text-Based Speech Video Synthesis from a Single Face Image." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788.
Books on the topic "Generative competitive neural network"
Panzironi, Francesca. Networks. Oxford University Press, 2017. http://dx.doi.org/10.1093/acrefore/9780190846626.013.270.
Book chapters on the topic "Generative competitive neural network"
Yalçın, Orhan Gazi. "Generative Adversarial Network." In Applied Neural Networks with TensorFlow 2, 259–84. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6513-0_12.
Decaestecker, Christine. "Competitive Clustering." In International Neural Network Conference, 833. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_102.
Serrano, Will. "The Generative Adversarial Random Neural Network." In IFIP Advances in Information and Communication Technology, 567–80. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79150-6_45.
Chiarantoni, Ernesto, Giuseppe Acciani, Girolamo Fornarelli, and Silvano Vergura. "Robust Unsupervised Competitive Neural Network by Local Competitive Signals." In Artificial Neural Networks — ICANN 2002, 963–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_156.
Zhao, Shuyang, and Jianwu Li. "Generating Low-Rank Textures via Generative Adversarial Network." In Neural Information Processing, 310–18. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_32.
Muñoz-Pérez, J., M. A. García-Bernal, I. Ladrón de Guevara-López, and J. A. Gomez-Ruiz. "BICONN: A Binary Competitive Neural Network." In Computational Methods in Neural Modeling, 430–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44868-3_55.
Long, Theresa W., and Emil L. Hanzevack. "Hierarchical Competitive Net Architecture." In Neural Network Engineering in Dynamic Control Systems, 255–75. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3066-6_13.
García-Bernal, M. A., J. Muñoz-Pérez, J. A. Gómez-Ruiz, and I. Ladrón de Guevara-López. "A Competitive Neural Network based on dipoles." In Computational Methods in Neural Modeling, 398–405. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44868-3_51.
Zhang, Dongyang, Jie Shao, Gang Hu, and Lianli Gao. "Sharp and Real Image Super-Resolution Using Generative Adversarial Network." In Neural Information Processing, 217–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_23.
Bayramli, Bayram, Usman Ali, Te Qi, and Hongtao Lu. "FH-GAN: Face Hallucination and Recognition Using Generative Adversarial Network." In Neural Information Processing, 3–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_1.
Conference papers on the topic "Generative competitive neural network"
Krivosheev, Nikolay, Ksenia Vik, Yulia Ivanova, and Vladimir Spitsyn. "Investigation of the Batch Size Influence on the Quality of Text Generation by the SeqGAN Neural Network." In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1005-1010.
Ni, Yao, Dandan Song, Xi Zhang, Hao Wu, and Lejian Liao. "CAGAN: Consistent Adversarial Training Enhanced GANs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/359.
Lu, Zhichao, Ian Whalen, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. "NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm (Extended Abstract)." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/659.
Padhy, Bibhu Prasad, and Barjeev Tyagi. "Artificial neural network based multi area Automatic Generation Control scheme for a competitive electricity market environment." In 2009 International Conference on Power Systems. IEEE, 2009. http://dx.doi.org/10.1109/icpws.2009.5442734.
Brati Favarin, Samuel, and Rafael Ballottin Martins. "Aplicação de Mineração de Dados para o Auxílio da Tomada de Decisão em Gestão de Pessoas." In Computer on the Beach. Itajaí: Universidade do Vale do Itajaí, 2020. http://dx.doi.org/10.14210/cotb.v11n1.p028-030.
Liu, Chang, Fuchun Sun, Changhu Wang, Feng Wang, and Alan Yuille. "MAT: A Multimodal Attentive Translator for Image Captioning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/563.
Bai, Yunsheng, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, and Wei Wang. "Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/275.
Cleveston, Iury, and Esther L. Colombini. "RAM-VO: A Recurrent Attentional Model for Visual Odometry." In Anais Estendidos do Simpósio Brasileiro de Robótica e Simpósio Latino Americano de Robótica. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/wtdr_ctdr.2021.18684.
Borges, Helyane Bronoski, and Julio Cesar Nievola. "Hierarchical classification using a Competitive Neural Network." In 2012 8th International Conference on Natural Computation (ICNC). IEEE, 2012. http://dx.doi.org/10.1109/icnc.2012.6234573.
Zhai, Zhonghua, and Jian Zhai. "Identity-preserving Conditional Generative Adversarial Network." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489282.